Figure 1: a) Combined raw geometries obtained from a calibrated setup of two hybrid color+depth cameras, rendered in green and red respectively. b) Result of geometry fusion obtained by our MLS-based approach. c) Textured geometry from a). Note the numerous visual artifacts due to inaccurate and incomplete geometry, especially near depth discontinuities. d) Our optimized textured geometry from b).

Abstract: Multi-view reconstruction aims at computing the geometry of a scene observed by a set of cameras. Accurate 3D reconstruction of dynamic scenes is a key component of a large variety of applications, ranging from special effects to telepresence and medical imaging. In this paper we propose a method based on Moving Least Squares (MLS) surfaces that robustly and efficiently reconstructs dynamic scenes captured by a calibrated set of hybrid color+depth cameras. Our reconstruction provides spatio-temporal consistency and seamlessly fuses color and geometric information. We illustrate our approach on a variety of real sequences and demonstrate that it compares favorably to state-of-the-art methods.
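The core ingredient named in this abstract, a Moving Least Squares surface, can be illustrated in its simplest 1D form: evaluating, at each query location, a locally weighted degree-1 polynomial fit to noisy samples. This is a minimal hypothetical sketch of the general MLS idea, not the paper's 3D surface-fusion method; the function name `mls_eval` and the Gaussian bandwidth `h` are assumptions for illustration.

```python
import math

def mls_eval(x, xs, ys, h=0.3):
    """Evaluate a degree-1 Moving Least Squares fit at x.

    Fits the line p(t) = a + b*t that minimizes
    sum_i w_i * (p(xs[i]) - ys[i])^2 with Gaussian weights
    w_i = exp(-(x - xs[i])^2 / h^2), then returns p(x).
    """
    w = [math.exp(-((x - xi) ** 2) / h ** 2) for xi in xs]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, xs))
    swy = sum(wi * yi for wi, yi in zip(w, ys))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    det = sw * swxx - swx * swx
    if abs(det) < 1e-12:        # degenerate window: fall back to weighted mean
        return swy / sw
    a = (swxx * swy - swx * swxy) / det   # weighted normal equations
    b = (sw * swxy - swx * swy) / det
    return a + b * x

# Noisy samples of y = x with alternating +/-0.05 perturbations:
xs = [i * 0.1 for i in range(21)]
ys = [xi + (0.05 if i % 2 == 0 else -0.05) for i, xi in enumerate(xs)]
smoothed = mls_eval(1.0, xs, ys)   # close to the true value 1.0
```

The 3D point-set variant used for depth-map fusion follows the same pattern, but fits local planes or polynomials to weighted neighborhoods of points instead of a 1D function.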
RGBD cameras, such as the Kinect, have recently revolutionized the field of real-time geometry and appearance acquisition. While impressive 3D reconstruction results have been obtained, combining data acquired by multiple RGBD cameras remains a technical challenge. Several methods have been proposed to estimate the internal parameters of each RGBD camera (such as the depth mapping function and focal length). Although the textured geometry obtained by each RGBD camera individually is visually attractive, even state-of-the-art methods have difficulty correctly combining the textured geometries obtained by several RGBD cameras via a rigid transformation. Based on this observation, our approach registers the RGBD cameras by a smooth field of rigid transformations, instead of a single rigid transformation. Experimental results on challenging data demonstrate the validity of the proposed approach.

Index Terms: RGBD cameras, virtual view rendering
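The key idea of this abstract, replacing a single rigid transformation with a smooth spatial field of them, can be sketched in 2D: each of several anchor locations carries its own rigid transform, and any point is warped by a distance-weighted blend of the nearby transforms. This is a hypothetical toy illustration (the names `blend_rigid_2d`, the anchors, and the Gaussian bandwidth `h` are all assumptions), not the registration method of the paper.

```python
import math

def blend_rigid_2d(p, anchors, h=0.2):
    """Apply a smooth field of 2D rigid transforms to point p.

    `anchors` is a list of (center, angle, (tx, ty)) tuples: each
    anchor holds a rigid transform that should dominate near its
    center. Angle and translation are blended with Gaussian weights
    based on the distance from p to each center, and the blended
    rigid transform is then applied to p.
    """
    wsum = asum = tx = ty = 0.0
    for (cx, cy), angle, (ax, ay) in anchors:
        d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
        w = math.exp(-d2 / h ** 2)
        wsum += w
        asum += w * angle
        tx += w * ax
        ty += w * ay
    angle, tx, ty = asum / wsum, tx / wsum, ty / wsum
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

anchors = [((0.0, 0.0), 0.0, (0.0, 0.0)),          # identity near the origin
           ((5.0, 0.0), math.pi / 2, (1.0, 0.0))]  # 90-degree turn plus shift far away
near_origin = blend_rigid_2d((0.0, 0.0), anchors)  # identity dominates here
far_point = blend_rigid_2d((5.0, 0.0), anchors)    # second anchor dominates here
```

Note that averaging angles directly is only meaningful when neighboring rotations are close; a full 3D method would blend rotations more carefully, for example via quaternion interpolation.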
Kinect depth maps often contain missing data, or "holes", for various reasons. Most existing Kinect-related research treats these holes as artifacts and tries to minimize them as much as possible. In this paper, we advocate a totally different idea: turning Kinect holes into useful information. In particular, we are interested in the unique type of holes caused by occlusion of the Kinect's structured light, which results in shadows and loss of depth acquisition. We propose a robust detection scheme that detects and classifies different types of shadows based on their distinct local shadow patterns as determined from geometric analysis, without assumptions about object geometry. Experimental results demonstrate that the proposed scheme achieves very accurate shadow detection. We also demonstrate the usefulness of the extracted shadow information by successfully applying it to automatic foreground segmentation.
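The occlusion shadows described here arise where a foreground object blocks the structured-light projector, so the resulting hole sits next to a sharp depth discontinuity. A simplified 1D sketch of that cue, classifying zero-depth runs in one row of a depth map by the depth jump across them, is shown below; this is an illustration of the idea, not the paper's full geometric scheme, and the threshold `jump` is a hypothetical value in millimetres.

```python
def classify_holes(depth_row, jump=300):
    """Classify zero-depth runs ("holes") in one row of a depth map.

    A hole bounded by valid depths on both sides is labelled an
    "occlusion shadow" when one side is much nearer than the other
    (suggesting a foreground object blocking the projector);
    otherwise it is labelled "other".
    Returns (start, end, label) tuples with end exclusive.
    """
    labels = []
    i, n = 0, len(depth_row)
    while i < n:
        if depth_row[i] == 0:
            start = i
            while i < n and depth_row[i] == 0:
                i += 1
            left = depth_row[start - 1] if start > 0 else None
            right = depth_row[i] if i < n else None
            if left is not None and right is not None and abs(left - right) > jump:
                labels.append((start, i, "occlusion shadow"))
            else:
                labels.append((start, i, "other"))
        else:
            i += 1
    return labels

# A foreground object at 800 mm casts a shadow (zeros) onto background at 2000 mm:
row = [800, 800, 800, 0, 0, 0, 2000, 2000, 2000]
result = classify_holes(row)  # [(3, 6, 'occlusion shadow')]
```

A real detector would work on 2D shadow patterns and account for the known projector-camera baseline direction, as the abstract's geometric analysis does.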
The development of commodity depth cameras such as the Kinect has paved the way for low-cost human body measurement. Existing Kinect-based body measurement methods are time-consuming because they also target full 3D reconstruction. In this paper, we present a simple yet fast and effective Kinect-based solution tailored specifically to human body measurement. In particular, our method captures four views of a subject's body and constructs a 3D point cloud for each view. The point clouds are then registered and merged into a full-body cloud, on which the body measurements are estimated automatically. Experimental results show that our method requires much shorter execution time than previous works while still producing measurements of comparable quality.
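The merge step for the four views can be sketched by assuming idealized turntable-style capture, where each view is a known 90-degree rotation about the body's vertical axis: each cloud is rotated into the common frame and concatenated. This toy sketch (the names `rotate_y` and `merge_views` are assumptions) omits the registration refinement the paper performs before merging.

```python
import math

def rotate_y(p, angle):
    """Rotate a 3D point about the vertical (y) axis."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = p
    return (c * x + s * z, y, -s * x + c * z)

def merge_views(views):
    """Bring four 90-degree-apart views into a common frame and merge.

    `views` is a list of four point clouds; the cloud at index k is
    rotated by k * 90 degrees about the vertical axis, and all
    rotated points are concatenated into one full-body cloud.
    """
    merged = []
    for k, cloud in enumerate(views):
        angle = k * math.pi / 2
        merged.extend(rotate_y(p, angle) for p in cloud)
    return merged

front = [(0.0, 1.0, 0.5)]   # a point measured in the front camera's frame
back = [(0.0, 1.0, 0.5)]    # a point measured in the back camera's frame
merged = merge_views([front, [], back, []])
# The back-view point lands on the opposite side (z ~ -0.5) of the body axis.
```

Girths and lengths can then be estimated directly on the merged cloud, which is why skipping full surface reconstruction saves time.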