This paper presents Rich360, a novel system for creating and viewing 360° panoramic video obtained from multiple cameras placed on a structured rig. Rich360 provides an as-rich-as-possible 360° viewing experience by effectively resolving two issues in the existing stitching pipeline. First, a deformable spherical projection surface is used to minimize the parallax between the multiple cameras. The surface is deformed spatio-temporally according to depth constraints estimated from the overlapping video regions, enabling fast and efficient parallax-free stitching independent of the number of views. Next, non-uniform spherical ray sampling is performed, with the sampling density varying according to the importance of each image region. Finally, for interactive viewing, the non-uniformly sampled video is mapped onto a uniform viewing sphere using a UV map. This approach preserves the richness of the input videos whenever the resolution of the final 360° panoramic video is smaller than the combined resolution of the input videos, as is the case for most 360° panoramic videos. We show various results from Rich360 that demonstrate the richness of the output video and the improvement in stitching quality.
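To make the non-uniform sampling idea concrete, below is a minimal sketch, assuming importance-driven inverse-CDF sampling along one spherical coordinate and a UV lookup back to a uniform viewing sphere. The function names, the importance profile, and all resolutions are illustrative assumptions, not values or code from the paper.

import numpy as np

def nonuniform_samples(importance, out_width):
    """Inverse-CDF sampling: longitudes with high importance get more samples.
    importance: per-longitude weights over the input panorama.
    Returns the longitudes (in [0, 1]) stored in the out_width-column texture."""
    cdf = np.cumsum(importance, dtype=np.float64)
    cdf /= cdf[-1]  # normalized CDF; steep regions attract more samples
    targets = (np.arange(out_width) + 0.5) / out_width
    return np.interp(targets, cdf, np.linspace(0.0, 1.0, len(cdf)))

def uv_lookup(sample_u, view_width):
    """UV map for interactive viewing: for each longitude of a uniform
    viewing sphere, the texture coordinate of the stored column."""
    u = (np.arange(view_width) + 0.5) / view_width
    tex = (np.arange(len(sample_u)) + 0.5) / len(sample_u)
    return np.interp(u, sample_u, tex)

# Example: concentrate resolution around the panorama center.
u_in = np.linspace(0.0, 1.0, 2048)
importance = 1.0 + 4.0 * np.exp(-((u_in - 0.5) ** 2) / 0.02)
sample_u = nonuniform_samples(importance, out_width=1024)
uv = uv_lookup(sample_u, view_width=4096)

Because the inverse CDF is flat where importance is high, uniformly spaced targets cluster there, so more texture columns are spent on important regions; the UV map then undoes the warp at view time.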
We present a hybrid approach that segments an object using both color and depth information obtained from a low-cost RGBD camera and sparsely located color cameras. Our system begins by generating dense depth information for each target image using Structure from Motion and Joint Bilateral Upsampling. We formulate multi-view object segmentation as Markov Random Field (MRF) energy optimization on a graph constructed from superpixels. To ensure inter-view consistency of the segmentation between color images that share too few color features, our local mapping method generates dense inter-view geometric correspondences from the dense depth images. Finally, a pixel-level optimization step refines the boundaries of the superpixel-based binary segmentation. We evaluated the validity of our method under various capture conditions, varying the number of views and the rotations and distances between cameras, and compared it with state-of-the-art methods on standard multi-view datasets. The comparison verified that the proposed method works efficiently, especially in sparse, wide-baseline capture environments.
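The following is a hedged sketch of a binary MRF energy over a superpixel graph, minimized here with iterated conditional modes (ICM) as a simple stand-in for the graph-cut solvers such systems typically use; it is not the paper's implementation, and the edge weights and unary costs are illustrative placeholders.

import numpy as np

def segment_superpixels(unary, edges, pairwise_w, n_iters=50):
    """unary: [n_sp, 2] costs for labels {0: background, 1: object}.
    edges: list of (i, j, weight) adjacency links between superpixels,
           where weight encodes color/depth similarity (Potts smoothness).
    Returns a binary label per superpixel."""
    n = unary.shape[0]
    labels = np.argmin(unary, axis=1)  # initialize from the data term alone
    nbrs = [[] for _ in range(n)]
    for i, j, w in edges:
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    for _ in range(n_iters):
        changed = False
        for i in range(n):
            costs = unary[i].copy()
            for j, w in nbrs[i]:  # Potts penalty for disagreeing neighbors
                for lab in (0, 1):
                    if lab != labels[j]:
                        costs[lab] += pairwise_w * w
            new = int(np.argmin(costs))
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:
            break
    return labels

# Toy usage: 4 superpixels in a chain, the middle two with ambiguous unaries.
unary = np.array([[0.1, 1.0], [0.5, 0.5], [0.5, 0.5], [1.0, 0.1]])
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
print(segment_superpixels(unary, edges, pairwise_w=0.3))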
High-quality depth painting for each object in a scene is a challenging task in 2D-to-3D stereo conversion. One way to accurately estimate the varying depth within an object in an image is to utilize existing 3D models. Automatic pose estimation approaches based on 2D-3D feature correspondences have been proposed to obtain depth from a given 3D model. However, when the 3D model is not identical to the target object, previous methods often produce erroneous depth near the object's silhouette. This paper introduces a novel 3D model-based depth estimation method that produces high-quality depth for rigid objects in a stereo conversion workflow. Given an exemplar 3D model and user-supplied correspondences, our method generates detailed object depth by optimizing an initial depth obtained through structural fitting and silhouette matching in the image domain. The final depth is accurate up to the given 3D model while remaining consistent with the image. We applied our method to image sequences containing objects with different appearances and varying poses; the experiments show that it generates plausible depth suitable for high-quality 2D-to-3D stereo conversion.
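A minimal sketch of the refinement idea follows, assuming depth rendered from the posed exemplar model and an object silhouette extracted from the image: the depth is kept close to the model's values while being smoothed inside the image silhouette, so discontinuities land on the mask boundary rather than on the (possibly misaligned) model silhouette. This is an illustrative screened least-squares formulation, not the paper's optimization; the weights are placeholders.

import numpy as np

def refine_depth(model_depth, mask, data_w=0.1, n_iters=200):
    """model_depth: [H, W] depth rendered from the fitted 3D model.
    mask: [H, W] boolean object silhouette extracted from the image.
    Jacobi iterations on a screened-Laplacian least-squares system:
    minimize sum over mask edges (d_i - d_j)^2 + data_w * (d_i - model_i)^2."""
    depth = np.where(mask, model_depth, 0.0).astype(np.float64)
    for _ in range(n_iters):
        nb_sum = np.zeros_like(depth)
        nb_cnt = np.zeros_like(depth)
        # Sum of 4-neighbours, restricted to pixels inside the mask.
        # np.roll wraps at the image border; assume the mask avoids it.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(depth, (dy, dx), axis=(0, 1))
            valid = np.roll(mask, (dy, dx), axis=(0, 1)) & mask
            nb_sum[valid] += shifted[valid]
            nb_cnt[valid] += 1.0
        # Jacobi update: balance smoothness against fidelity to the model.
        upd = (nb_sum + data_w * model_depth) / (nb_cnt + data_w)
        depth = np.where(mask, upd, 0.0)
    return depth

# Toy usage: a square object over a synthetic ramp of model depth.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
model = np.fromfunction(lambda y, x: 1.0 + 0.01 * x, (64, 64))
refined = refine_depth(model, mask)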