Motion blur is a frequent requirement for rendering high-quality animated images. However, the computational cost involved is usually higher than for images that have not been temporally antialiased. In this article we study the influence of high-level properties such as object material and speed, shutter time, and antialiasing level. Based on scenes containing variations of these parameters, we design psychophysical experiments to determine how influential each one is in the perception of image quality. This work gives insight into the effects these parameters have and exposes situations in which motion-blurred stimuli may be indistinguishable from a gold standard. As an immediate practical application, images of similar quality can be produced while the computing requirements are reduced. Algorithmic efforts have traditionally focused on finding new, improved methods that alleviate sampling artifacts by steering computation toward the most important dimensions of the rendering equation. Concurrently, rendering algorithms can take advantage of certain perceptual limits to simplify and optimize computation. To our knowledge, none of these efforts has identified or exploited such limits in the rendering of motion blur. This work can be considered a first step in that direction.
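To make the cost trade-off concrete, the sketch below shows how shutter time and the number of temporal samples typically enter a stochastic motion-blur renderer. It is only a minimal illustration: the `shade` callable and the toy moving-edge shader are hypothetical stand-ins, not the stimuli or renderer used in the article.

```python
import random

def motion_blurred_pixel(shade, x, y, shutter_time, num_samples):
    """Average the shading of pixel (x, y) over random times in the shutter interval.

    `shade(x, y, t)` stands in for whatever renderer evaluates a pixel at time t;
    `shutter_time` and `num_samples` correspond to two of the high-level
    parameters studied in the article.
    """
    total = 0.0
    for _ in range(num_samples):
        t = random.uniform(0.0, shutter_time)  # stratified sampling omitted for brevity
        total += shade(x, y, t)
    return total / num_samples

# Toy usage: a "moving edge" shader; fewer samples trade sampling noise for speed.
moving_edge = lambda x, y, t: 1.0 if x < 10.0 + 50.0 * t else 0.0
print(motion_blurred_pixel(moving_edge, 11.0, 0.0, shutter_time=0.02, num_samples=8))
```

Halving `num_samples` roughly halves the shading cost of the pixel, which is exactly the kind of saving that perceptual indistinguishability thresholds could license.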
We present a new pipeline that enables head-motion parallax in omnidirectional stereo (ODS) panorama video rendering using a neural depth decoder. While recent ODS panorama cameras record short-baseline horizontal stereo parallax to offer the impression of binocular depth, they do not support the translational degrees of freedom (DoF) necessary to also provide head-motion parallax in virtual reality (VR) applications. To overcome this limitation, we propose a pipeline that enhances the classical ODS panorama format with 6 DoF free-viewpoint rendering by decomposing the scene into a multi-layer mesh representation. Given a spherical stereo panorama video, we use the horizontal disparity to store explicit depth information for both eyes in a simple neural decoder architecture. While this approach produces reasonable results for individual frames, video rendering typically suffers from temporal depth inconsistencies. We therefore perform a subsequent optimization that improves temporal consistency by fine-tuning our depth decoder for both temporal and spatial smoothness. Using a consumer-grade ODS camera, we evaluate our approach on a number of real-world scene recordings and demonstrate the versatility and robustness of the proposed pipeline.
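As a rough illustration of the fine-tuning stage, the sketch below pairs a small convolutional depth decoder with L1 spatial and temporal smoothness terms. The layer sizes, the choice of PyTorch, and the unweighted sum of losses are assumptions made for illustration only; they do not reproduce the architecture or training schedule of the actual pipeline.

```python
import torch
import torch.nn as nn

class DepthDecoder(nn.Module):
    """Illustrative stand-in for a neural depth decoder.

    Maps per-eye disparity inputs to a positive depth map; the layer sizes
    are assumptions, not the published architecture.
    """
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # keep depth positive
        )

    def forward(self, disparity):
        return self.net(disparity)

def smoothness_losses(depth_t, depth_prev):
    """Simple L1 spatial and temporal smoothness terms."""
    # Spatial: penalize depth gradients between neighbouring pixels.
    spatial = (depth_t[..., :, 1:] - depth_t[..., :, :-1]).abs().mean() \
            + (depth_t[..., 1:, :] - depth_t[..., :-1, :]).abs().mean()
    # Temporal: penalize depth changes between consecutive frames.
    temporal = (depth_t - depth_prev).abs().mean()
    return spatial, temporal

# Usage sketch with dummy disparity maps for two consecutive frames.
decoder = DepthDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
disp_prev, disp_t = torch.rand(1, 2, 64, 128), torch.rand(1, 2, 64, 128)
depth_prev, depth_t = decoder(disp_prev), decoder(disp_t)
spatial, temporal = smoothness_losses(depth_t, depth_prev)
loss = spatial + temporal  # real pipelines would also weight a data/disparity term
loss.backward()
optimizer.step()
```

The key point mirrored from the abstract is that the decoder is fine-tuned jointly for spatial and temporal smoothness rather than per frame, which is what suppresses depth flicker across the video.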