Abstract. In this paper, a new video coding method using multiple uncalibrated cameras is proposed. We exploit the redundancy between the cameras' viewpoints and compress efficiently based on a depth map. Since our target videos are taken with uncalibrated cameras, the depth map is computed not in the real world but in the projective space, a virtual space defined by the projective reconstruction of two still images. A position in this space corresponds to a depth value, so full calibration of the cameras is not required. Generating the depth map requires finding correspondences between the cameras, for which we use a "plane sweep" algorithm. Apart from the original base image and the camera parameters, our method needs only the depth map, which contributes to the effectiveness of the compression.
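To illustrate the plane-sweep step mentioned above, the following is a minimal sketch under the assumption that each candidate plane in the projective space is represented by the homography it induces between the two views; the function name plane_sweep_depth, the SAD matching cost, and the window size are illustrative choices, not the authors' implementation.

```python
import numpy as np
import cv2  # OpenCV, assumed available for warping and filtering


def plane_sweep_depth(base_img, other_img, homographies, patch=5):
    """Sketch of a plane sweep: for each candidate plane, warp the second
    view into the base view with the homography that plane induces and keep,
    per pixel, the plane with the lowest window-aggregated SAD cost.
    The returned per-pixel plane index plays the role of a quantized
    'depth' in the projective space."""
    h, w = base_img.shape[:2]
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    depth_index = np.zeros((h, w), dtype=np.int32)
    kernel = np.ones((patch, patch), np.float32) / (patch * patch)

    for d, H in enumerate(homographies):
        # Warp the second view as if the scene lay on candidate plane d.
        warped = cv2.warpPerspective(other_img, H, (w, h))
        diff = np.abs(base_img.astype(np.float32) - warped.astype(np.float32))
        if diff.ndim == 3:                    # colour images: sum over channels
            diff = diff.sum(axis=2)
        cost = cv2.filter2D(diff, -1, kernel)  # SAD aggregated over a window
        better = cost < best_cost
        best_cost[better] = cost[better]
        depth_index[better] = d

    return depth_index
```

In such a sketch, only the per-pixel plane index (together with the base image and camera parameters) would need to be transmitted, which is the source of the compression gain the abstract describes.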