Augmented Virtual Environments (AVE), or virtual-reality fusion systems, fuse dynamic videos with static three-dimensional (3D) models of a virtual environment and offer an effective way to visualize and understand multichannel surveillance data. However, texture distortion caused by viewpoint changes in such systems is a critical issue that needs to be addressed. To minimize texture fusion distortion, this paper presents a novel virtual environment system that dynamically fuses multiple surveillance videos with a virtual 3D scene in two phases: an offline phase and an online phase. In the offline phase, a static virtual environment is obtained by performing a 3D photogrammetric reconstruction from input images of the scene. In the online phase, the virtual environment is augmented by fusing multiple videos through two alternative strategies. One strategy dynamically maps images from the different videos onto the 3D model of the virtual environment; the other extracts moving objects and represents them as billboards. The system can be used to visualize the 3D environment from any viewpoint, augmented by real-time videos. Experiments and user studies in different scenarios demonstrate the advantages of our system.
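As a rough illustration of the first fusion strategy, the sketch below projects the vertices of a scene mesh into a single video frame to obtain per-vertex texture coordinates. It assumes a simple pinhole camera model; the names (K, R, t, vertices) and the visibility test are placeholder assumptions, not the paper's actual implementation.

```python
# Minimal sketch of projective texture mapping: assign each mesh vertex a UV
# coordinate by projecting it into the current video frame (pinhole model).
import numpy as np

def project_vertices_to_uv(vertices, K, R, t, frame_w, frame_h):
    """Project Nx3 mesh vertices into the frame and return normalized UVs."""
    cam = R @ vertices.T + t.reshape(3, 1)     # 3 x N, camera-space coordinates
    pix = K @ cam                              # 3 x N, homogeneous pixel coordinates
    pix = pix[:2] / pix[2:3]                   # perspective divide
    u = pix[0] / frame_w                       # normalize to [0, 1] texture space
    v = 1.0 - pix[1] / frame_h                 # flip v to match image convention
    visible = cam[2] > 0                       # keep only vertices in front of the camera
    return np.stack([u, v], axis=1), visible

# Toy usage with a synthetic camera looking down the z-axis
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
uv, vis = project_vertices_to_uv(verts, K, R, t, 640, 480)
```

In a full system the resulting UVs would be updated every frame so that the latest video image is re-projected onto the static geometry, while occluded or back-facing vertices are excluded.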
Manhattan-world buildings are a dominant scene type in urban areas. Many existing methods for reconstructing such scenes are either vulnerable to noisy and incomplete data or suffer from high computational complexity. In this paper, we present a novel approach to quickly reconstruct lightweight Manhattan-world urban building models from images. Our key idea is to reconstruct buildings through a salient feature: corners. Given a set of urban building images, Structure-from-Motion and 3D line reconstruction are first applied to recover camera poses, sparse point clouds, and line clouds. We then use orthogonal planes detected from the line cloud to generate corners, which indicate parts of possible buildings. Starting from the corners, we fit cubes to the point clouds by optimizing corner parameters and obtain cube representations of the corresponding buildings. Finally, a registration step is performed on the cube representations to generate more accurate models. Experimental results show that our approach handles challenging cases with noisy and incomplete data while producing lightweight polygonal building models at low computational cost.
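To illustrate the cube-fitting step in spirit, the sketch below fits an axis-aligned cuboid to a building point cloud by optimizing its center and half-extents against a box surface-distance residual. It assumes the points are already rotated into the Manhattan frame; the parameterization and the SciPy least-squares solver are illustrative assumptions, not the authors' corner-based optimization.

```python
# Minimal sketch: fit an axis-aligned cuboid (center + half-extents) to points
# assumed to lie on a building's surface, by minimizing the box surface distance.
import numpy as np
from scipy.optimize import least_squares

def box_surface_residuals(params, points):
    """Distance from each point to the surface of an axis-aligned box."""
    center, half = params[:3], np.abs(params[3:])
    q = np.abs(points - center) - half                 # per-axis offset from the faces
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
    inside = np.minimum(np.max(q, axis=1), 0.0)
    return outside + inside                            # signed distance to the surface

def fit_cuboid(points):
    """Initialize from the bounding box, then refine with least squares."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    x0 = np.concatenate([(lo + hi) / 2, (hi - lo) / 2])
    sol = least_squares(box_surface_residuals, x0, args=(points,))
    return sol.x[:3], np.abs(sol.x[3:])                # center, half-extents

# Toy usage: noisy points sampled on the faces of a 2 x 1 x 1 box
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(600, 3)) * np.array([2.0, 1.0, 1.0])
axis = rng.integers(0, 3, size=600)                    # snap each point to one face
half_extents = np.array([1.0, 0.5, 0.5])
pts[np.arange(600), axis] = np.sign(pts[np.arange(600), axis]) * half_extents[axis]
center, half = fit_cuboid(pts + 0.01 * rng.normal(size=pts.shape))
```

The paper's actual pipeline optimizes corner parameters derived from orthogonal planes in the line cloud rather than a free box center, but the residual-minimization structure of the fit is analogous.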