Event-based cameras have recently drawn the attention of the computer vision community thanks to their advantages in terms of high temporal resolution, low power consumption, and high dynamic range compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation, and UAV vision, among other applications. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind that of their frame-based counterparts. Two main reasons for this performance gap are: (1) the lack of effective low-level representations and architectures for event-based object classification, and (2) the absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state of the art with extensive experiments, showing better classification performance and real-time computation.
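As background for the representation described above: a common low-level building block for event-based recognition is the time surface, a per-pixel map of recent event activity that decays as events age. The sketch below shows a minimal exponential-decay time surface; the event tuple layout (x, y, t, polarity), the decay constant tau, and the function name are illustrative assumptions, not the paper's actual representation, which additionally aggregates past information through local memory units.

```python
import numpy as np

def time_surface(events, width, height, t_ref, tau=50e3):
    """Exponential-decay time surface evaluated at time t_ref.

    events: iterable of (x, y, t, polarity) with polarity in {0, 1}
            and timestamps in microseconds (assumed format).
    tau:    decay constant; the value here is an assumption.
    """
    # Per-polarity memory of the latest event timestamp at each pixel.
    last_t = np.full((2, height, width), -np.inf)
    for x, y, t, p in events:
        if t <= t_ref:
            last_t[p, y, x] = t
    # Recent events map close to 1; stale pixels decay toward 0
    # (np.exp(-inf) gives exactly 0 for never-touched pixels).
    return np.exp((last_t - t_ref) / tau)
```

A classifier can then be trained on patches or histograms of such surfaces; the decayed values give the network a notion of motion history without storing the raw event stream.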
We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different from those of standard intensity images, and simple extensions of corner detection methods designed for such images do not perform well on event-based data. We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras: our implementation processes up to 1.6 million events per second (Mev/s) on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes in the direction of the corners than previous ones. Our method also naturally assigns a confidence score to the corners, which can be useful for post-processing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state of the art with extensive experiments, showing better performance.
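The key idea of a speed-invariant time surface is to make pixel values depend on the order of recent events rather than on their timestamps, so a fast-moving edge and a slow-moving edge produce the same pattern. The sketch below is a hedged reconstruction of that idea, not the paper's exact update rule; the rank-based demotion scheme, the neighborhood radius r, and the function name are all assumptions for illustration.

```python
import numpy as np

def speed_invariant_surface(events, width, height, r=5):
    """Rank-based time surface: values encode the *order* of recent
    events in a local neighborhood, not their timestamps.

    A hedged sketch; the exact update rule and radius are assumptions.
    """
    S = np.zeros((height, width), dtype=np.int32)
    top = (2 * r + 1) ** 2  # rank assigned to the newest event
    for x, y, _t, _p in events:
        y0, y1 = max(0, y - r), min(height, y + r + 1)
        x0, x1 = max(0, x - r), min(width, x + r + 1)
        patch = S[y0:y1, x0:x1]
        patch[patch > S[y, x]] -= 1  # demote older events in the patch
        S[y, x] = top                # newest event takes the top rank
    return S
```

Because only event order matters, the surface around a corner looks the same at any object speed, which is what lets a single trained classifier generalize across motions.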
Nowadays, the amount of video data acquired for observation or surveillance applications is overwhelming. Given these huge volumes of video data, change detection algorithms are needed to focus the attention of operators on "areas of interest". In the particular task of aerial observation, camera motion and viewpoint differences introduce parallax effects, which may substantially affect the reliability and efficiency of automatic change detection. In this paper, we introduce a novel approach for change detection that considers the geometric aspects of camera sensors as well as the statistical properties of changes. Our method is based on optical flow matching, constrained by the epipolar geometry, and combined with a statistical change decision criterion. The good performance of our method is demonstrated on our new public Aerial Imagery Change Detection (AICD) dataset of labeled aerial images.
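To make the combination of epipolar constraint and statistical decision concrete, here is a minimal sketch of the decision step, assuming flow matches and a fundamental matrix are already available. The function name, thresholds, and the Gaussian-style residual test are assumptions standing in for the paper's actual criterion.

```python
import numpy as np

def epipolar_change_mask(pts1, pts2, residuals, F, epi_thresh=1.0, k=3.0):
    """Flag changed matches between two aerial views.

    pts1, pts2: (N, 2) matched pixel coordinates, e.g. from optical flow.
    residuals:  (N,) appearance differences after warping.
    F:          3x3 fundamental matrix between the two views.
    All thresholds here are illustrative assumptions.
    """
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])
    p2 = np.hstack([pts2, ones])
    # Point-to-epipolar-line distance: matches violating the epipolar
    # constraint are parallax or registration errors, not scene changes.
    lines = p1 @ F.T  # epipolar lines l = F p1, one per match
    num = np.abs(np.sum(p2 * lines, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    geom_ok = num / den < epi_thresh
    # Statistical decision: among geometrically consistent matches,
    # flag residuals far from the inlier statistics as changes.
    mu, sigma = residuals[geom_ok].mean(), residuals[geom_ok].std()
    return geom_ok & (np.abs(residuals - mu) > k * sigma)
```

The point of the geometric gate is that parallax displaces pixels along epipolar lines, so motion consistent with the epipolar geometry is explained away before any appearance-based change test is applied.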
Aerial image change detection is highly dependent on the accuracy of the camera pose and may be subject to false alarms caused by misregistration. In this paper, we present a novel pose estimation approach based on Visual Servoing that combines aerial videos with 3D models. First, we introduce a formulation that relates image registration to the poses of a moving camera observing a 3D plane. Then, we combine this formulation with Newton's algorithm in order to estimate camera poses in a given aerial video. Finally, we present and discuss experimental results that demonstrate the robustness and accuracy of our method.
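The abstract couples its registration formulation with Newton's algorithm; the sketch below shows the generic Gauss-Newton loop such an approach iterates, refining pose parameters by minimizing 2D reprojection error against a 3D model. The project and jacobian callbacks, and whatever pose parametrization they imply, are assumptions standing in for the paper's formulation.

```python
import numpy as np

def gauss_newton_pose(project, jacobian, params0, pts3d, obs2d, iters=10):
    """Gauss-Newton refinement of camera pose parameters.

    project(params, pts3d)  -> (N, 2) predicted pixel positions.
    jacobian(params, pts3d) -> (2N, P) Jacobian of the projections.
    Both callbacks are hypothetical placeholders for this sketch.
    """
    params = params0.copy()
    for _ in range(iters):
        # Residual between observed and predicted pixel positions.
        r = (obs2d - project(params, pts3d)).ravel()
        J = jacobian(params, pts3d)
        # Normal equations: (J^T J) dp = J^T r
        dp = np.linalg.solve(J.T @ J, J.T @ r)
        params = params + dp
        if np.linalg.norm(dp) < 1e-8:  # converged
            break
    return params
```

Each iteration linearizes the projection around the current pose and solves a small linear system, which is what gives Newton-style registration its fast convergence when the initial pose from the video is already close.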
With the growing capacity of video devices, human operators are nowadays overwhelmed by the huge volumes of data generated in applications such as surveillance. Automatic video processing techniques are therefore required to filter out uninteresting data and focus the attention of operators; however, reliability remains a challenging problem. In this paper, we show how spatio-temporal redundancy may be exploited to enhance the accuracy of automatic change detection in aerial videos. More precisely, we present an algorithm based on Belief Propagation that improves spatio-temporal consistency between successive change detection results. Experiments demonstrate that our method leads to increased accuracy in change detection.
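As an illustration of Belief Propagation applied to change masks, the sketch below runs min-sum loopy BP on a pixel grid with two labels (0 = no change, 1 = change) and a Potts smoothness term, so that isolated, inconsistent detections are suppressed. This is a generic grid-BP sketch under assumed costs, not the paper's algorithm.

```python
import numpy as np

def bp_smooth_changes(unary, iters=10, smooth_cost=1.0):
    """Min-sum loopy belief propagation on an (H, W) grid.

    unary: (H, W, 2) per-pixel label costs, e.g. derived from raw
           per-frame change detections (an assumed input format).
    """
    H, W, L = unary.shape
    # msgs[d] holds the message each pixel last received from its
    # neighbor in direction d: 0 = from below, 1 = from above,
    # 2 = from the right, 3 = from the left.
    msgs = np.zeros((4, H, W, L))

    def sweep(store_d, exclude_d, axis, shift):
        # Sender aggregates its unary cost and all incoming messages
        # except the one from the recipient, then applies the Potts
        # pairwise term (constant penalty for label disagreement).
        h = unary + msgs.sum(axis=0) - msgs[exclude_d]
        m = np.minimum(h, h.min(axis=-1, keepdims=True) + smooth_cost)
        m -= m.min(axis=-1, keepdims=True)  # normalize for stability
        # np.roll wraps at the image border; a production version
        # would zero the wrapped edge messages.
        msgs[store_d] = np.roll(m, shift, axis=axis)

    for _ in range(iters):
        sweep(0, 1, 0, -1)  # every pixel sends to the pixel above it
        sweep(1, 0, 0, 1)   # ... to the pixel below it
        sweep(2, 3, 1, -1)  # ... to the pixel on its left
        sweep(3, 2, 1, 1)   # ... to the pixel on its right

    belief = unary + msgs.sum(axis=0)
    return belief.argmin(axis=-1).astype(np.uint8)
```

To obtain the spatio-temporal behavior the abstract describes, one plausible scheme is to mix the current frame's detection costs with the previous frame's smoothed result when building the unary terms, so consistency is propagated across successive frames as well as across neighboring pixels.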