Intelligent video surveillance (IVS) systems are spreading rapidly and are being deployed in a wide range of applications. In most cases, even in multi-camera installations, the video from each feed is processed independently. This paper describes a system that fuses tracking information from multiple cameras, vastly expanding the capabilities of such installations. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain largely unchanged. We present a new method to quickly and efficiently calibrate all the cameras to the site map, making the system viable for large-scale commercial deployments. The method uses line-feature correspondences, which enable easy feature selection and provide a built-in precision metric to improve calibration accuracy.
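The abstract gives no implementation detail, but calibrating a camera view to a site map amounts to estimating a plane-to-plane homography, and line correspondences fit the same direct linear transform (DLT) machinery as points: a point homography H maps lines as l_map ∝ H^{-T} l_img, so G = H^{-T} can be estimated from line pairs exactly as a point homography is from point pairs. Below is a minimal illustrative sketch in Python/NumPy; the function name and interface are ours, not the paper's method.

```python
import numpy as np

def homography_from_lines(img_lines, map_lines):
    """Estimate the image-to-map homography H from >= 4 line
    correspondences, each line given in homogeneous form (a, b, c)
    for a*x + b*y + c = 0. Since lines map as l_map ~ H^{-T} l_img,
    we run the standard point DLT on the line vectors to get
    G = H^{-T}, then invert."""
    A = []
    for l, m in zip(img_lines, map_lines):
        l = np.asarray(l, float) / np.linalg.norm(l)
        m = np.asarray(m, float) / np.linalg.norm(m)
        # Two independent rows of the cross-product constraint m x (G l) = 0.
        A.append(np.concatenate([np.zeros(3), -m[2] * l, m[1] * l]))
        A.append(np.concatenate([m[2] * l, np.zeros(3), -m[0] * l]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    G = Vt[-1].reshape(3, 3)   # G ~ H^{-T}, defined up to scale
    return np.linalg.inv(G).T  # recover H (also up to scale)
```

At least four line pairs in general position (no three concurrent) are required; a production calibration would add Hartley-style normalization and a robust estimation loop such as RANSAC on top of this.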
This paper presents a fast forensic video event analysis and retrieval system in a geospatial framework. Starting from tracking targets and analyzing video streams from distributed camera networks, the system generates tracking metadata for each video and maps and fuses it into a uniform geospatial coordinate system. The combined metadata is saved into a spatial database, where target trajectories are represented using geometry and geography data types. Powered by the spatial functions of the database, various video events, such as crossing a line, entering an area, loitering, and meeting, are detected by executing stored procedures that we have implemented. Geographic information system (GIS) data from TigerLine and GeoNames are integrated with the system to provide contextual information for more advanced forensic queries. A semantic data mining component is also attached to generate text descriptions of events and scene context. NASA World Wind is the geobrowser used to submit queries and visualize results. The main contribution of this system is that it pioneers video event retrieval using geospatial computing techniques. This interdisciplinary combination makes the system scalable and manageable for large amounts of video data from distributed cameras. It also makes online video search possible by efficiently filtering tremendous amounts of data using geospatial indexing techniques. From an application point of view, it extends the frontier of geospatial applications by presenting a forward-looking application model.
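The event detectors in this system are stored procedures built on the database's spatial functions. As a rough, self-contained illustration of the same predicates outside a database, here is a sketch using the Shapely library; this is our choice for illustration, not the paper's SQL implementation, and distances below are in raw coordinate units rather than geodesic metres.

```python
from shapely.geometry import LineString, Point, Polygon

def crossed_line(track_pts, tripwire):
    """Tripwire event: does the trajectory cross the given segment?
    track_pts is a list of (x, y) map coordinates."""
    return LineString(track_pts).crosses(LineString(tripwire))

def entered_area(track_pts, area):
    """Area-entry event: the trajectory starts outside the polygon
    and some later point lies inside it."""
    poly = Polygon(area)
    inside = [poly.contains(Point(p)) for p in track_pts]
    return (not inside[0]) and any(inside[1:])

def loitering(track_pts, timestamps, radius, min_seconds):
    """Loitering event (simplified): the target stays within `radius`
    of some anchor point for at least `min_seconds`."""
    for i, p in enumerate(track_pts):
        anchor = Point(p)
        for j in range(i + 1, len(track_pts)):
            if anchor.distance(Point(track_pts[j])) > radius:
                break
            if timestamps[j] - timestamps[i] >= min_seconds:
                return True
    return False
```

Expressing these predicates inside the database instead, as the paper does, lets the spatial index prune candidate trajectories before any geometry test runs, which is what makes the approach scale to large camera networks.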
We introduce a distributed augmented reality framework for aerial video that uses CPU/GPU acceleration to correct sensor metadata errors, build a geo-referenced scene model registered to the video, overlay important data, and stream the result to multiple web clients, improving situational awareness during real-time missions.
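The abstract does not describe the rendering pipeline, but registering geo-referenced overlays to aerial video ultimately reduces to projecting world points through the metadata-derived camera model. A minimal pinhole-projection sketch, assuming a local ENU frame and corrected metadata expressed as intrinsics K, rotation R, and camera centre C (all names ours, for illustration only):

```python
import numpy as np

def project_overlay(points_enu, K, R, C):
    """Project 3-D overlay points (local ENU metres) to pixels with a
    pinhole model: x ~ K [R | -R C] X.
    K: 3x3 intrinsics, R: 3x3 world-to-camera rotation,
    C: camera centre in the same ENU frame (e.g. from corrected
    platform metadata). Returns an (N, 2) array of pixel coordinates;
    points behind the camera come back as NaN."""
    X = np.asarray(points_enu, float)   # (N, 3) world points
    Xc = (X - C) @ R.T                  # world -> camera frame
    px = Xc @ K.T                       # apply intrinsics
    pix = px[:, :2] / px[:, 2:3]        # perspective divide
    pix[Xc[:, 2] <= 0] = np.nan         # cull points behind the camera
    return pix
```

A real pipeline would additionally undistort the frames, refine R and C by registering each frame against the scene model (the metadata-error correction the abstract mentions), and rasterize the projected overlays before streaming.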