In the last two decades, data and information fusion has experienced significant development, driven mainly by advances in sensor technology. Sensors provide a continuous flow of data about the environment in which they are deployed, which is received and processed to build a dynamic estimate of the situation. With current technology it is relatively simple to deploy a set of sensors in a specific geographic area in order to create highly sensorized spaces. However, to fuse and process the information coming from the data sources of a highly sensorized space, certain problems inherent to this type of technology must be solved. The challenge is analogous to that found in the field of the Internet of Things (IoT). IoT technology is characterized by providing the infrastructure to capture, store, and process huge amounts of heterogeneous sensor data (in most cases from different manufacturers), just as occurs in data fusion applications. This task is not simple, mainly because the technologies involved are not standardized (especially the communication protocols used by connectable sensors). The solutions available today are proprietary, implying strong vendor dependence and high cost. The aim of this paper is to present a new open-source platform for the collection, management, and analysis of huge amounts of heterogeneous sensor data. In addition, the platform is hardware-agnostic, highly scalable, and cost-effective. This platform is called Thinger.io. One of its main characteristics is the ability to model sensorized environments through a high-level language that allows a simple and easy implementation of data fusion applications, as we show in this paper.
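The ingestion problem the abstract describes can be illustrated with a minimal, generic sketch of normalizing heterogeneous sensor payloads into one common record format. All field names, vendor formats, and units below are illustrative assumptions, not Thinger.io's actual data model or API:

```python
# Generic sketch: map vendor-specific sensor payloads (hypothetical formats)
# into a common {sensor, metric, value, unit} record.
def normalize(source, payload):
    """Normalize a raw payload from a known source into a common record."""
    if source == "vendor_a":          # assumed payload shape: {"temp_c": 21.5}
        return {"sensor": source, "metric": "temperature",
                "value": payload["temp_c"], "unit": "C"}
    if source == "vendor_b":          # assumed payload shape: {"temperature_f": 70.7}
        return {"sensor": source, "metric": "temperature",
                "value": (payload["temperature_f"] - 32) * 5 / 9, "unit": "C"}
    raise ValueError(f"unknown source: {source}")

print(normalize("vendor_b", {"temperature_f": 70.7}))
```

Once every device's readings are reduced to one record shape, downstream fusion code can be written against that shape alone, independent of the sensor hardware.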
Abstract: In this paper, we propose a multi-agent system (MAS) architecture to manage spatially distributed active (pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, so different approaches are required. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras centralise management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making opens the possibility of alternative decentralised architectures. We approach this problem by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and its system agents.
The increasing number of autonomous systems monitoring and controlling visual sensor networks makes homogeneous (device-independent), flexible (accessible from various places), and efficient (real-time) access to all their underlying video devices a necessity. This paper describes an architecture for camera control and video transmission in a distributed system such as that of a cooperative multi-agent video surveillance scenario. The proposed system enables access to a limited-access resource (video sensors) in an easy, transparent, and efficient way for both local and remote processes. It is particularly suitable for Pan-Tilt-Zoom (PTZ) cameras, for which remote control is essential.
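A device-independent access layer of the kind described could be sketched as an abstract camera interface with interchangeable backends. The class and method names here are hypothetical illustrations, not the paper's actual design:

```python
from abc import ABC, abstractmethod

class PTZCamera(ABC):
    """Hypothetical device-independent interface for a PTZ camera."""

    @abstractmethod
    def move_to(self, pan: float, tilt: float, zoom: float) -> None:
        """Point the camera at the given pan/tilt angles and zoom level."""

    @abstractmethod
    def grab_frame(self) -> bytes:
        """Return the latest encoded video frame."""

class SimulatedCamera(PTZCamera):
    """In-memory stand-in, useful for testing clients without hardware."""

    def __init__(self):
        self.pan = self.tilt = 0.0
        self.zoom = 1.0

    def move_to(self, pan, tilt, zoom):
        self.pan, self.tilt, self.zoom = pan, tilt, zoom

    def grab_frame(self):
        return b"\x00" * 64  # placeholder frame payload

cam = SimulatedCamera()
cam.move_to(30.0, -10.0, 2.0)
print(cam.pan, cam.tilt, cam.zoom)  # 30.0 -10.0 2.0
```

Local and remote processes would both program against `PTZCamera`, while concrete subclasses hide each vendor's control protocol.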
This paper deals with the multi-objective formulation of video compression and its solution using the NSGA-II algorithm. We define video compression as a problem with two competing objectives and seek a set of near-optimal, so-called Pareto-optimal solutions instead of a single optimal solution. This is applied to a new, patent-pending codec that needs some optimization before its release. Compression is performed on a standard video sequence commonly used for performance measurement. We also present the convergence speed of NSGA-II and discuss the suitability of MOEAs in this scope.
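The two-objective idea can be illustrated with a toy Pareto-front computation over candidate encodings. This is only the non-dominated-sorting core that NSGA-II builds on, with made-up (bitrate, distortion) values, not the paper's codec or objectives:

```python
def pareto_front(points):
    """Return the non-dominated subset of (rate, distortion) pairs,
    where lower is better for both objectives."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Toy candidate encodings: (bitrate in kbps, distortion score), both minimized.
candidates = [(800, 0.05), (400, 0.09), (600, 0.06), (400, 0.12), (200, 0.20)]
print(sorted(pareto_front(candidates)))
# → [(200, 0.2), (400, 0.09), (600, 0.06), (800, 0.05)]
```

Each point on the front trades bitrate against distortion; no point improves one objective without worsening the other, which is exactly the solution set a MOEA such as NSGA-II approximates.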
Abstract. This paper outlines an architecture for multi-camera and multi-modal sensor fusion. We define a high-level architecture in which image sensors such as standard color, thermal, and time-of-flight cameras can be fused with high-accuracy location systems based on UWB, WiFi, Bluetooth, or RFID technologies. This architecture is especially well suited for indoor environments, where such heterogeneous sensors usually coexist. The main advantage of such a system is that a combined, non-redundant output is provided for all detected targets. In its simplest form, the fused output includes the location of each target, with additional features depending on the sensors involved in the detection, e.g., location plus thermal information. In this way, a surveillance or context-aware system obtains more accurate and complete information than by using only one kind of technology.
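The non-redundant fused output described above can be sketched with a simple nearest-neighbour association between two detection sources. This is a minimal illustration under assumed data shapes and an assumed gating distance, not the paper's actual fusion algorithm:

```python
import math

GATE = 1.0  # metres; association threshold (an assumed value)

def fuse(camera_dets, uwb_dets, gate=GATE):
    """Merge camera detections (x, y, temperature) with UWB detections
    (x, y, tag id) into a non-redundant list of fused targets."""
    fused, used = [], set()
    for cx, cy, temp in camera_dets:
        best, best_d = None, gate
        for i, (ux, uy, _tag) in enumerate(uwb_dets):
            d = math.hypot(cx - ux, cy - uy)
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            ux, uy, tag = uwb_dets[best]
            used.add(best)
            # average the two position estimates; keep both modalities' features
            fused.append({"pos": ((cx + ux) / 2, (cy + uy) / 2),
                          "temp": temp, "tag": tag})
        else:
            fused.append({"pos": (cx, cy), "temp": temp, "tag": None})
    # unmatched UWB detections become location-only targets
    for i, (ux, uy, tag) in enumerate(uwb_dets):
        if i not in used:
            fused.append({"pos": (ux, uy), "temp": None, "tag": tag})
    return fused
```

A target seen by both sensors yields one record carrying location plus thermal information and its tag id, while a target seen by only one sensor still appears, with the missing modality left empty.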