In recent years many Tone Mapping Operators (TMOs) have been presented to display High Dynamic Range Images (HDRIs) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The dual of tone mapping, inverse tone mapping, expands a Low Dynamic Range Image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. The majority of today's media is stored in low dynamic range; Inverse Tone Mapping Operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image-based lighting. We propose an approximate solution to this problem: a median-cut algorithm locates the high-luminance areas, density estimation over these samples generates an expand map, and this map is used to extend the range in the high-luminance regions via an inverse Photographic Tone Reproduction operator.
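The expansion step summarized above can be sketched as follows, assuming the simple global form of the photographic operator, Ld = Lw / (1 + Lw), whose closed-form inverse is Lw = Ld / (1 - Ld). The function names, the `l_max` target luminance, and the linear blending by the expand map are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def inverse_reinhard(ld, l_max=1000.0):
    """Invert the simple global photographic operator Ld = Lw / (1 + Lw).

    ld: LDR luminance in [0, 1); l_max: assumed target peak luminance.
    """
    ld = np.clip(ld, 0.0, 0.999)      # avoid division by zero as Ld -> 1
    lw = ld / (1.0 - ld)              # closed-form inverse of the operator
    return l_max * lw / lw.max()      # rescale to the target dynamic range

def expand_range(ldr_lum, expand_map, l_max=1000.0):
    """Blend original and expanded luminance using an expand map in [0, 1]
    (high where median-cut / density estimation marked bright regions)."""
    hdr_lum = inverse_reinhard(ldr_lum, l_max)
    return (1.0 - expand_map) * ldr_lum + expand_map * hdr_lum
```

Pixels outside the expand map keep their LDR values, while pixels inside it are pushed toward the target luminance range.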
Simulation fidelity is characterized as the extent to which a Virtual Environment (VE) and relevant interactions with it are indistinguishable from a user's interaction with a real environment. The growing number of VE training applications which target a high level of simulation fidelity, mainly for transfer of training to the real world, have made it crucial to examine the manner in which these particular implementations and designs are evaluated. The methodology presented in this study focuses on real versus simulated virtual worlds, comparing participants' level of presence, task performance, and the cognitive state employed to complete a memory task. A 15-minute seminar was presented in four different conditions: real, 3D desktop, 3D Head Mounted Display (HMD), and audio-only (between-subjects design). Four independent groups of 18 participants took part in the experiment, which investigated the effects of level of immersion on participants' memory recall and memory awareness state (relevant to episodic and semantic memory types), as well as on their perception of the experimental space and sense of presence in every condition. The level of reported presence was not positively associated with accurate memory recall in all conditions, although the scores for both presence and seminar memory recall in the "real" condition were statistically higher. Analysis of memory awareness states gave invaluable insight into "how" participants remembered both communicated information and space, as opposed to "what," most interestingly across specific conditions where results for presence and accurate memory recall were not proven to be significant.
Tone mapping operators are designed to reproduce visibility and the overall impression of brightness, contrast and color of the real world onto limited dynamic range displays and printers. Although many tone mapping operators have been published in recent years, no thorough psychophysical experiments have yet been undertaken to compare such operators against the real scenes they are purporting to depict. In this paper, we present the results of a series of psychophysical experiments to validate six frequently used tone mapping operators against linearly mapped High Dynamic Range (HDR) scenes displayed on a novel HDR device. Individual operators address the tone mapping issue using a variety of approaches and the goals of these techniques are often quite different from one another. Therefore, the purpose of this investigation was not simply to determine which is the "best" algorithm, but more generally to propose an experimental methodology to validate such operators and to determine the participants' impressions of the images produced compared to what is visible on a high contrast ratio display.
The computation of high-fidelity images in real-time remains one of the key challenges for computer graphics. Recent work has shown that, by understanding the human visual system, selective rendering may be used to render at high quality only those parts of a scene to which the human viewer is attending, and the rest of the scene at a much lower quality. This can result in a significant reduction in computational time, without the viewer being aware of the quality difference. Selective rendering is guided by models of the human visual system, typically in the form of a 2D saliency map, which predict where the user will be looking in any scene. Computing these maps often takes many seconds, precluding such an approach in any interactive system, where many frames need to be rendered per second. In this paper we present a novel saliency map which exploits the computational performance of modern GPUs. With our approach it is thus possible to calculate this map in milliseconds, allowing it to be part of a real-time rendering system. In addition, we also show how depth, habituation and motion can be added to the saliency map to further guide the selective rendering. This ensures that only the most perceptually important parts of any animated sequence need be rendered in high quality. The rest of the animation can be rendered at a significantly lower quality, and thus at much lower computational cost, without the user being aware of this difference.
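To illustrate the kind of 2D saliency map discussed above, the sketch below computes a centre-surround luminance-contrast map on the CPU with NumPy. A real-time version would run the blur passes as GPU shader or mipmap operations; the box-blur stand-in for Gaussian levels and the particular radii are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def box_blur(img, r):
    """Box blur of radius r via shifted sums (a cheap stand-in for the
    Gaussian pyramid levels a GPU would compute with mipmaps)."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy : r + dy + h, r + dx : r + dx + w]
    return out / (2 * r + 1) ** 2

def saliency_map(lum):
    """Centre-surround contrast at two assumed scales, normalised to [0, 1]."""
    s = np.abs(box_blur(lum, 1) - box_blur(lum, 4))
    s += np.abs(box_blur(lum, 2) - box_blur(lum, 8))
    rng = np.ptp(s)
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
```

Depth, habituation and motion would each contribute an additional channel, combined with this contrast map before it guides the selective renderer.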