In real outdoor scenes, objects distant from the observer suffer from a natural effect called aerial perspective, which fades the colors of the objects and blends them with the color of the environmental light. Aerial perspective can be modeled using a physics-based approach; however, handling the changing and unpredictable environmental illumination and weather conditions of real scenes is challenging in terms of both visual coherence and computational cost. In such cases, even state-of-the-art models fail to generate realistic synthesized aerial perspective effects. To overcome this limitation, we propose a real-time, turbidity-based, full-spectrum aerial perspective rendering approach. First, we estimate the atmospheric turbidity by matching the luminance distribution of a captured sky image to sky models. The obtained turbidity is then employed for aerial perspective rendering using an improved scattering model. We performed a set of experiments to evaluate the scattering model and the aerial perspective model, and we also provide a framework for real-time aerial perspective rendering. The results confirm that the proposed approach synthesizes realistic aerial perspective effects at low computational cost, outperforming state-of-the-art aerial perspective rendering methods for real scenes.
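The fading-and-blending behavior described above can be sketched with the standard single-scattering approximation, in which a surface color is attenuated by atmospheric transmittance and blended toward the in-scattered sky color. This is a minimal illustration only; the per-channel extinction coefficients, and their dependence on turbidity, are assumptions standing in for the paper's full-spectrum scattering model.

```python
import numpy as np

def aerial_perspective(surface_color, sky_color, distance, beta):
    """Blend a surface color toward the sky color with distance.

    Standard single-scattering approximation:
        L = L0 * T + L_sky * (1 - T),  with T = exp(-beta * d).

    beta is a per-channel extinction coefficient (illustrative values;
    higher atmospheric turbidity implies larger extinction).
    """
    transmittance = np.exp(-np.asarray(beta, dtype=float) * distance)
    return (np.asarray(surface_color, dtype=float) * transmittance
            + np.asarray(sky_color, dtype=float) * (1.0 - transmittance))

# At distance 0 the surface color is unchanged; at large distances
# it converges to the sky color, reproducing the fading effect.
near = aerial_perspective([1.0, 0.2, 0.1], [0.6, 0.7, 0.9], 0.0, [0.02, 0.03, 0.05])
far = aerial_perspective([1.0, 0.2, 0.1], [0.6, 0.7, 0.9], 1e4, [0.02, 0.03, 0.05])
```

The exponential falloff is what makes the effect cheap to evaluate per pixel, which is consistent with the real-time goal stated in the abstract.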
This paper details the implementation of a system developed to generate 3D motion capture data through the analysis of raster-based motion video. The system's general procedure includes acquiring video, processing the raster data into raw motion data using motion tracking technology, formatting the raw data into various usable forms with custom software, importing it into 3D animation software via custom scripts, and then applying it to 3D geometry. The purpose of the project is to achieve the realism and efficiency that motion capture provides without the high cost of traditional motion capture equipment. Though this system may not always match the resolution or real-time capability of traditional motion capture, it does allow users to apply real-world motion to virtual objects in an efficient manner.
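The formatting step in the pipeline above, turning raw tracker output into data a 3D animation package can key, can be sketched as follows. The sample format (frame, pixel x, pixel y), the frame rate, and the pixel-to-scene-unit scale factor are all illustrative assumptions; the paper's custom software and scripts are not specified here.

```python
def tracks_to_keyframes(samples, fps=30.0, scale=0.01):
    """Convert raw tracker samples into time-stamped keyframes.

    samples: iterable of (frame, x_px, y_px) tuples from 2D motion
             tracking (hypothetical format).
    fps:     frame rate of the source video (assumed).
    scale:   pixels-to-scene-units conversion (assumed).
    """
    keys = []
    for frame, x_px, y_px in samples:
        keys.append({
            "time": frame / fps,  # seconds into the clip
            "x": x_px * scale,    # scene units
            "y": y_px * scale,
        })
    return keys
```

A downstream import script would then iterate over these dictionaries and set one keyframe per sample on the target object's transform channels.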