This paper presents Rich360, a novel system for creating and viewing a 360° panoramic video obtained from multiple cameras placed on a structured rig. Rich360 provides an as-rich-as-possible 360° viewing experience by effectively resolving two issues that arise in existing stitching pipelines. First, a deformable spherical projection surface is used to minimize the parallax between cameras. The surface is deformed spatio-temporally according to depth constraints estimated from the overlapping video regions. This enables fast and efficient parallax-free stitching independent of the number of views. Next, non-uniform spherical ray sampling is performed, with the sampling density varying according to the importance of each image region. Finally, for interactive viewing, the non-uniformly sampled video is mapped onto a uniform viewing sphere using a UV map. This approach preserves the richness of the input videos when the resolution of the final 360° panoramic video is smaller than the combined resolution of the input videos, which is the case for most 360° panoramic videos. We show various results from Rich360 to demonstrate the richness of the output video and the improvement in the stitching results.
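The non-uniform sampling and UV remapping step can be sketched as follows. This Python snippet (a minimal illustration with assumed names such as build_nonuniform_uv and a 1-D importance profile; it is not the authors' implementation) allocates more panorama columns to longitudes with higher importance and records the inverse mapping that a viewer would use to look up the non-uniform texture from a uniform viewing sphere.

```python
import numpy as np

def build_nonuniform_uv(importance, out_width):
    """Illustrative 1-D version of importance-driven ray sampling.

    importance : (W,) non-negative weights, one per source longitude
    out_width  : number of columns in the non-uniformly sampled panorama
    Returns
      uv  : (out_width,) source longitude in [0, 1) read by each output column
      inv : (W,) normalized texture coordinate for each uniform longitude,
            i.e. the UV map applied on the uniform viewing sphere.
    """
    w = np.asarray(importance, dtype=np.float64)
    w = w / w.sum()
    cdf = np.concatenate([[0.0], np.cumsum(w)])   # monotone, ends at 1
    src = np.linspace(0.0, 1.0, len(w) + 1)       # uniform source longitudes

    # Sampling: output column t reads source longitude cdf^{-1}(t), so a
    # high-importance region occupies proportionally more output columns.
    t = (np.arange(out_width) + 0.5) / out_width
    uv = np.interp(t, cdf, src)

    # Viewing: a uniform sphere longitude s looks up texture coordinate cdf(s).
    s = (np.arange(len(w)) + 0.5) / len(w)
    inv = np.interp(s, src, cdf)
    return uv, inv

# Toy usage: give the front-facing quarter of the panorama 4x sampling density.
W = 360
importance = np.where((np.arange(W) >= 135) & (np.arange(W) < 225), 4.0, 1.0)
uv, inv = build_nonuniform_uv(importance, out_width=1024)
```

The same idea extends to two dimensions by building a full 2-D warp; the essential point is that the stored panorama and the UV map are constructed as a pair, so the viewer sees a uniformly parameterized sphere while the texture budget follows image importance.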
While 3D graphics techniques have found their place in scientific visualization, especially for medical and biological applications, it has long been speculated that 3D may not be as effective for conventional information visualization. We agree with this view: naively extending 2D visual forms to 3D is not the way to go. Instead, a computer-generated 3D virtual world serves best when it is seamlessly integrated with the real 3D space. For example, a physical automobile model surrounded by various virtual visual forms such as text, images, sound, and 3D models offers the user (or the audience) another level of appreciation of, and experience with, the subject being presented. In this paper, we present our ongoing development efforts toward such a framework, which calls for tight integration of 3D visual forms with the real 3D space. The Spatial AR Hologram (SPAROGRAM) manifests augmented three-dimensional information by making full use of the real 3D space surrounding the real object, comprehensively and simultaneously. To accomplish this, multiple layers of stereoscopic images were implemented; the stereoscopic images enable spatial visualization using both the physical and the virtual third dimension. Furthermore, to ensure continuity of the spatial experience, we support real-time spatial exploration through user interaction. We describe the whole process of system design and prototyping. Our initial investigation suggests that the newly conceived holographic display produces not only continuous 3D space perception but also better spatial awareness and realism. Furthermore, it is a promising way to present information on a three-dimensional display and to help users understand information effectively and efficiently.
Figure 1: Overall process. (a) Input sequence, (b) constructed panoramic image, (c) confidence map, (d) user scribbles, (e) panoramic depth map, (f) output depth sequence.

Accurate depth estimation is a challenging yet essential step in the conversion of a 2D image sequence to a 3D stereo sequence. We present a novel approach to constructing a temporally coherent depth map for each image in a sequence. The quality of the estimated depth is high enough for the purpose of 2D-to-3D stereo conversion. Our approach first combines the video sequence into a panoramic image. A user can scribble on this single panoramic image to specify depth information. The depth is then propagated to the remainder of the panoramic image. This panoramic depth map is then remapped to the original sequence and used as the initial guess for each individual depth map in the sequence. Our approach greatly simplifies the user interaction required for depth assignment and allows relatively free camera movement during the generation of the panoramic image. We demonstrate the effectiveness of our method by showing stereo-converted sequences with various camera motions.
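As a rough illustration of the propagation step, the following Python sketch spreads sparse scribbled depth values across a single image with a simple edge-aware diffusion. The function name, parameters, and diffusion scheme are our own simplification for illustration and are not the propagation solver used in the paper.

```python
import numpy as np

def propagate_scribbles(gray, scribble_depth, scribble_mask,
                        iters=500, sigma=0.05):
    """Edge-aware diffusion of sparse depth scribbles (illustrative only).

    gray           : (H, W) float luminance image in [0, 1]
    scribble_depth : (H, W) float, depth values valid where scribble_mask is True
    scribble_mask  : (H, W) bool, True at scribbled pixels
    """
    depth = np.where(scribble_mask, scribble_depth, 0.0).astype(np.float64)
    for _ in range(iters):
        acc = np.zeros_like(depth)
        wsum = np.zeros_like(depth)
        # Average the 4-neighbourhood, weighting each neighbour by intensity
        # similarity so depth does not leak across strong image edges.
        # np.roll wraps horizontally, which suits a 360° panoramic image.
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb_depth = np.roll(depth, (dy, dx), axis=(0, 1))
            nb_gray = np.roll(gray, (dy, dx), axis=(0, 1))
            w = np.exp(-((gray - nb_gray) ** 2) / (2 * sigma ** 2))
            acc += w * nb_depth
            wsum += w
        depth = acc / np.maximum(wsum, 1e-8)
        depth[scribble_mask] = scribble_depth[scribble_mask]  # keep user input fixed
    return depth
```

The resulting panoramic depth map would then be warped back to each frame (using the same mapping that built the panorama) and used as the initial guess for the per-frame depth maps described above.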