Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant difference in image quality. Furthermore, cross-modal effects, that is, the influence of one sensory input on another, for example sound on visuals, have also recently been shown to have a substantial impact on viewer perception of image quality. In this paper we investigate the relationship between audio beat rate and video frame rate in order to manipulate temporal visual perception. This represents an initial step towards establishing a comprehensive understanding of audio-visual integration in multisensory environments.
The natural world presents our visual system with a wide, ever-changing range of colors and intensities. Existing video cameras are capable of capturing only a limited part of this range with sufficient resolution. High-dynamic-range (HDR) images can represent most of the real world's luminances, but until now capturing HDR images with a linear response function has been limited to static scenes. This demonstration showcases a novel, complete HDR video solution. The system includes a unique HDR video camera capable of capturing a full HDTV video stream with 20 f-stops of dynamic range at a resolution of 1920 x 1080 pixels and 30 frames per second; an encoding method for coping with the huge amount of data generated by the camera (achieving a compression ratio of up to 100:1 with real-time decompression); and a new 22-inch desktop HDR display for directly visualizing the dynamic HDR content.

This HDR video solution should be of great interest to cinematographers. The camera accurately captures real-world lighting, from lions moving in deep shadow on the bright African veldt to surgery with its vast range of lighting, from dark body cavities to bright operating-theater lights. In addition, HDR video content can be incorporated into dynamic visualization systems, allowing virtual objects to be viewed under dynamic real-world lighting. For example, rather than taking a physical mock-up of a proposed new car to a remote location to produce advertising material, a camera crew can take the HDR video system to the location and capture the desired lighting and environment, including any moving elements (such as clouds and people), then combine the video material with the car's CAD model and paint BRDFs to produce highly compelling imagery.
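The need for the encoding step can be illustrated with some back-of-the-envelope arithmetic. The sketch below is only illustrative: the resolution, frame rate and 100:1 ratio come from the abstract, while the assumption of 16 bits per colour channel for the raw HDR stream is hypothetical, since the camera's actual pixel format is not stated.

```python
# Rough data-rate estimate for an HDR video stream (illustrative only).
# Assumption (not from the abstract): 3 channels x 16 bits per raw HDR pixel.

WIDTH, HEIGHT = 1920, 1080        # HDTV resolution (from the abstract)
FPS = 30                          # frames per second (from the abstract)
BITS_PER_PIXEL = 3 * 16           # assumed raw HDR pixel format

raw_bits_per_second = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
raw_mb_per_second = raw_bits_per_second / 8 / 1e6

compression_ratio = 100           # "up to 100:1" from the abstract
compressed_mb_per_second = raw_mb_per_second / compression_ratio

print(f"Raw stream:        {raw_mb_per_second:.0f} MB/s")        # ~373 MB/s
print(f"At 100:1 encoding: {compressed_mb_per_second:.1f} MB/s") # ~3.7 MB/s
```

Under these assumptions the uncompressed stream is on the order of 370 MB/s, which makes clear why a dedicated encoding method with real-time decompression is part of the system.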
Mobile devices, also known as small-form-factor (SFF) devices, such as mobile phones, PDAs and ultra-mobile PCs, have continued to grow in popularity. Improvements in SFF hardware have enabled a range of suitable applications such as gaming, interactive visualisation and mobile mapping. Although high-fidelity graphics systems typically have significant computational requirements, the time taken may be largely resolution dependent. The limited resolution of SFFs indicates that such platforms are prime candidates for running high-fidelity graphics.

Due to the limited hardware available on mobile devices, it is not currently possible to produce high-fidelity graphics in reasonable time. However, most SFFs have some degree of network capability. Using a remote server in conjunction with a mobile device to render high-fidelity graphics on demand allows us to substantially reduce the total rendering time. This paper introduces a client-server framework for minimising rendering times, using a cost function to predict the optimal distribution of rendering.
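The abstract does not give the cost function itself; the sketch below is a minimal, hypothetical illustration of the kind of decision such a function might drive, balancing local render time against server render time plus network transfer. All parameter names and the timing model are assumptions, not the paper's method.

```python
# Hypothetical sketch: choose how much of a frame to render remotely.
# The paper's actual cost function is not given in the abstract; this simply
# balances local render time against server compute plus transfer time.

def remote_fraction(pixels: int,
                    local_rate: float,    # pixels/s renderable on the SFF device
                    server_rate: float,   # pixels/s renderable on the server
                    bandwidth: float,     # bytes/s of the network link
                    bytes_per_pixel: int = 3) -> float:
    """Return the fraction of pixels to render on the server that minimises
    the completion time when local and remote work proceed in parallel."""
    best_f, best_cost = 0.0, float("inf")
    for step in range(101):
        f = step / 100.0
        local_time = (1.0 - f) * pixels / local_rate
        remote_time = f * pixels / server_rate + f * pixels * bytes_per_pixel / bandwidth
        cost = max(local_time, remote_time)   # both parts run concurrently
        if cost < best_cost:
            best_f, best_cost = f, cost
    return best_f

# Example: a 320x240 frame, slow local renderer, fast server, modest link.
f = remote_fraction(320 * 240, local_rate=5e3, server_rate=5e5, bandwidth=2e5)
print(f"Render {f:.0%} of the pixels remotely")
```

In this toy model the optimum shifts towards local rendering as bandwidth drops or the server slows, which is the trade-off a predictive cost function for distributed rendering has to capture.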
High-fidelity computer graphics offer the possibility for archaeologists to put excavated cultural heritage artefacts virtually back into their original setting and illumination conditions. This enables hypotheses about the perception of objects and their environments to be investigated in a safe and controlled manner. This paper presents a case study of the pipeline for the acquisition, modelling, rapid prototyping and virtual relighting of a Roman statue head preserved at Herculaneum in Italy. The statue head was excavated in 2006, after having been buried during the eruption of Mount Vesuvius in AD 79.