Tone Mapping Operators (TMOs) aim at converting real-world high dynamic range (HDR) images, captured with HDR cameras, into low dynamic range (LDR) images that can be displayed on LDR displays. Several TMOs have been proposed over the last decade, ranging from simple global mappings to more complex operators that simulate the human visual system. While these solutions generally work well for still pictures, they are usually less effective for video sequences, where they become a source of visual artifacts, and only a few of them can be adapted to cope with a sequence of images. In this paper we present a major problem that a static TMO usually encounters when dealing with video sequences, namely temporal coherency. Indeed, because each tone mapper processes each frame separately, no temporal coherency is taken into account, and the results can be quite disturbing for videos with highly varying dynamics. We propose a temporal coherency algorithm designed to analyze a video as a whole and, from its characteristics, adapt each tone-mapped frame of a sequence in order to preserve temporal coherency. This temporal coherency algorithm has been tested on a set of real as well as Computer Graphics Image (CGI) content and compared against several algorithms designed to be time-dependent. Results show that our temporal coherency technique preserves the overall contrast in a sequence of images. Furthermore, this technique is applicable to any TMO, as it is a post-processing step that depends only on the TMO used.
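The post-processing idea described above can be sketched as follows, assuming a hypothetical brightness-coherency style scaling in which each frame's HDR "key" (geometric-mean luminance) is anchored to the brightest frame of the whole sequence; the function names and the exact scaling rule are illustrative assumptions, not the paper's precise algorithm:

```python
import numpy as np

def log_key(luma, eps=1e-6):
    """Geometric-mean ('key') luminance of one HDR frame."""
    return np.exp(np.mean(np.log(luma + eps)))

def coherent_scale(hdr_keys, frame_index):
    """Scale factor anchoring a frame to the brightest frame of the video.

    hdr_keys: key values of every HDR frame, computed in a first pass
    over the whole sequence (the 'analyze the video as a whole' step).
    """
    anchor = max(hdr_keys)
    return hdr_keys[frame_index] / anchor

def apply_temporal_coherency(tonemapped, hdr_keys, frame_index):
    """Post-process one tone-mapped LDR frame so that relative brightness
    between frames is preserved, independently of which TMO produced it."""
    s = coherent_scale(hdr_keys, frame_index)
    return np.clip(tonemapped * s, 0.0, 1.0)
```

Because the scaling only needs the per-frame keys and the already tone-mapped output, a step like this can be layered on top of any TMO, which is what makes the approach TMO-agnostic.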
Abstract: This paper studies the design and application of a novel visual attention model meant to compute the user's gaze position automatically, i.e. without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and cognitive processes taking place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous on-screen gaze point position intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with a gain in accuracy of more than 100%. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid alternative to object-based approaches. Finally, we describe different applications of our model when exploring virtual environments. We present different algorithms that can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate.
Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated user's gaze and are meant to improve the user's sensations in future virtual reality applications.
Index Terms: visual attention model, first-person exploration, gaze tracking, visual effects, level of detail.
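A minimal sketch of how bottom-up and top-down components might be fused into a continuous gaze point: the multiplicative fusion, the exponential smoothing of the gaze trajectory, and all names below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def gaze_point(bottom_up, top_down):
    """Fuse a normalized bottom-up saliency map with a top-down weight map
    (same shape) and return the (row, col) of the most attended pixel."""
    combined = bottom_up * top_down
    return np.unravel_index(np.argmax(combined), combined.shape)

def smooth_gaze(prev, new, alpha=0.3):
    """Exponentially smooth successive gaze points so the simulated gaze
    moves continuously between frames (hypothetical smoothing step)."""
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))
```

Gaze-contingent effects such as depth-of-field blur or level-of-detail selection would then read the smoothed gaze point each frame instead of querying a hardware eye tracker.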