Abstract-This paper studies the design and application of a novel visual attention model meant to compute a user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, unlike previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and cognitive processes taking place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. It combines bottom-up and top-down components to compute a continuous gaze point position on screen that is intended to match the user's actual gaze. We conducted an experiment to compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy more than doubled. This suggests that computing a gaze point in real time in a 3D virtual environment is possible and is a valid alternative to object-based approaches. Finally, we present several applications of our model for the exploration of virtual environments, i.e., algorithms that improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that the gaze information computed by our visual attention model can be used to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed from the simulated user's gaze and are meant to improve the user's sensations in future virtual reality applications.
Index Terms-visual attention model, first-person exploration, gaze tracking, visual effects, level of detail.
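The abstract describes combining bottom-up and top-down components into a single continuous gaze point. Below is a minimal sketch of that idea, not the authors' implementation: it assumes a per-pixel bottom-up saliency map and a top-down weight map, combines them multiplicatively, and smooths the result over time. The function name, the multiplicative combination, and the smoothing constant are all illustrative assumptions.

```python
import numpy as np

def estimate_gaze_point(bottom_up, top_down, prev_gaze=None, smoothing=0.7):
    """Return an (x, y) gaze estimate in pixel coordinates.

    bottom_up -- 2D array of bottom-up saliency (e.g., contrast/motion cues)
    top_down  -- 2D array of top-down weights (e.g., navigation-related bias)
    prev_gaze -- previous gaze estimate, used for temporal smoothing
    smoothing -- weight given to the previous estimate (0 = no smoothing)
    """
    attention = bottom_up * top_down            # multiplicative combination (an assumption)
    y, x = np.unravel_index(np.argmax(attention), attention.shape)
    gaze = np.array([x, y], dtype=float)
    if prev_gaze is not None:                   # keep the gaze point continuous over time
        gaze = smoothing * np.asarray(prev_gaze) + (1.0 - smoothing) * gaze
    return gaze
```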
In this paper we analyze and try to predict the gaze behavior of users navigating in virtual environments. We focus on first-person navigation, which involves forward and backward motion on a ground surface with turns toward the left or right. We found that gaze behavior in virtual reality, with input devices such as mice and keyboards, is similar to that observed in real life: participants anticipated turns as in real-life conditions, i.e., when they can actually move their body and head. We also found influences of visual occlusions and optic flow similar to those reported in the existing literature on real navigation. We then propose three simple gaze prediction models taking as input (1) the motion of the user, given by the rotation velocity of the camera on the yaw axis (considered here as the virtual heading direction), and/or (2) the optic flow on screen. These models were tested with data collected in various virtual environments. Results show that they can significantly improve the prediction of gaze position on screen, especially during turns in the virtual environment. The model based on the rotation velocity of the camera appears to be the best trade-off between simplicity and efficiency. We suggest that these models could be used in several interactive applications using the gaze point as input. They could also be used as a new top-down component in any existing visual attention model.
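The simplest model described here predicts gaze from the camera's yaw rotation velocity alone. A minimal sketch of that idea follows; the linear gain, the screen size, and the clamping are illustrative assumptions, not values or details taken from the paper.

```python
def predict_gaze_x(yaw_velocity_deg_per_s, screen_width_px=1920, gain_px_per_deg_s=4.0):
    """Predict horizontal gaze position from the camera's yaw rotation velocity.

    A positive yaw velocity (turning right) shifts the predicted gaze point
    toward the right of the screen, mimicking the turn anticipation observed
    in the experiments.
    """
    center = screen_width_px / 2.0
    x = center + gain_px_per_deg_s * yaw_velocity_deg_per_s
    return min(max(x, 0.0), float(screen_width_px - 1))  # clamp to screen bounds

# Example: turning right at 30 deg/s shifts the predicted gaze 120 px right of center.
print(predict_gaze_x(30.0))
```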
Abstract-Rendering realistic organic materials is a challenging issue. The human eye is an important part of nonverbal communication and, consequently, requires specific modeling and rendering techniques to enhance the realism of virtual characters. We propose an image-based method for estimating both iris morphology and scattering features in order to generate convincing images of virtual eyes. To this end, we develop a technique to unrefract iris photographs. We model the morphology of the human iris as an irregular multilayered tissue and then approximate the scattering features of the captured iris. Finally, we propose a real-time rendering technique based on the subsurface texture mapping representation and introduce a precomputed refraction function, as well as a caustic function, which accounts for the light interactions at the corneal interface.
Index Terms-Three-dimensional graphics and realism, image-based rendering, physically based modeling.
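The paper's precomputed refraction function is not reproduced here; as background, the quantity being precomputed is the bending of view rays at the corneal interface, which follows Snell's law. Below is a generic sketch of that refraction step under assumed refractive indices (air around 1.0, cornea around 1.376); it is an illustration of the underlying optics, not the paper's precomputation or caustic function.

```python
import numpy as np

def refract(incident, normal, n_air=1.0, n_cornea=1.376):
    """Refract a unit incident direction at a surface with unit outward normal.

    Returns the refracted unit direction, or None on total internal reflection.
    """
    eta = n_air / n_cornea
    cos_i = -np.dot(normal, incident)            # incident points toward the surface
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                              # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal
```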
We describe the control, shape, and appearance models that are built using an original photogrammetric method to capture the characteristics of speaker-specific facial articulation, anatomy, and texture. Two original contributions are put forward: a trainable trajectory formation model that predicts the articulatory trajectories of a talking face from phonetic input, and a texture model that computes a texture for each 3D facial shape according to articulation. Using motion capture data from different speakers and module-specific evaluation procedures, we show that this cloning system restores detailed idiosyncrasies and the global coherence of visible articulation. Results of a subjective evaluation of the full system against competing trajectory formation models are also presented and discussed.
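The texture model described above computes a texture per facial shape according to articulation. A very rough sketch of one way such a model can be realized is shown below: blending a set of example textures with weights driven by articulatory parameters. This is an illustrative assumption only and not the paper's actual texture model.

```python
import numpy as np

def blend_textures(example_textures, articulation_weights):
    """Blend example textures (HxWx3 arrays) with normalized articulation-driven weights."""
    w = np.asarray(articulation_weights, dtype=float)
    w = w / w.sum()                               # convex combination of example textures
    return sum(wi * tex for wi, tex in zip(w, example_textures))
```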