With the rapid development of unmanned aerial vehicles (UAVs), UAV-based intelligent airborne surveillance systems, represented by real-time ground vehicle speed estimation, have attracted wide attention from researchers. However, extracting speed information from UAV videos still poses many challenges, including a dynamically moving background, small target sizes, complicated environments, and diverse scenes. In this paper, we propose a novel adaptive framework for multi-vehicle ground speed estimation in airborne videos. First, we build a UAV-based traffic dataset. Then, we use a deep learning detection algorithm to detect vehicles in the UAV's field of view and obtain their image trajectories with a tracking-by-detection algorithm. Thereafter, we present a homography-based motion compensation method: it obtains matching feature points via optical flow and excludes points on detected targets, so that the homography matrix is computed accurately and the true motion trajectory in the current frame can be determined. Finally, vehicle speed is estimated from the mapping between pixel distance and actual distance. The method uses the actual vehicle size as prior information, adaptively recovers the pixel scale by estimating the vehicle size in the image, and then calculates the vehicle speed. To evaluate the performance of the proposed system, we carry out extensive experiments on the AirSim simulation platform as well as real UAV aerial surveillance experiments. Through quantitative and qualitative analysis of the simulation and real-world results, we verify that the proposed system can detect, track, and estimate the speed of ground vehicles simultaneously, even with a single downward-looking camera, and that it obtains effective and accurate speed estimates in various complex scenes.
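The motion compensation and scale recovery described in this abstract can be sketched in a few lines of Python with OpenCV. This is a minimal illustration under stated assumptions, not the authors' implementation: background corners are tracked with pyramidal Lucas-Kanade optical flow, detected vehicle boxes are masked out so moving targets do not bias the homography, and a car-length prior (assumed here to be about 4.5 m) converts pixel displacement into metric speed. All function and parameter names are illustrative.

```python
import cv2
import numpy as np

def compensate_and_estimate_speed(prev_gray, curr_gray, vehicle_boxes,
                                  prev_center, curr_center,
                                  box_len_px, fps, real_len_m=4.5):
    """Homography-based motion compensation and scale-recovered speed.

    vehicle_boxes: list of (x, y, w, h) detections to exclude from matching.
    prev_center / curr_center: tracked vehicle centers (pixels) per frame.
    box_len_px: vehicle length in pixels, used to recover the metric scale.
    real_len_m: assumed real vehicle length (prior), ~4.5 m for a car.
    """
    # Mask out detected vehicles so only background points constrain
    # the homography estimate.
    mask = np.full(prev_gray.shape, 255, dtype=np.uint8)
    for x, y, w, h in vehicle_boxes:
        mask[y:y + h, x:x + w] = 0

    # Detect background corners and track them into the current frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8,
                                       mask=mask)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]

    # The homography maps previous-frame coordinates into the current frame.
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

    # Warp the previous vehicle position to remove camera-induced motion.
    p = np.array([[prev_center]], dtype=np.float32)  # shape (1, 1, 2)
    prev_in_curr = cv2.perspectiveTransform(p, H)[0, 0]

    # Residual displacement is the vehicle's own motion, in pixels per frame.
    disp_px = np.linalg.norm(np.asarray(curr_center) - prev_in_curr)

    # Recover metric scale from the vehicle-size prior and convert to m/s.
    metres_per_px = real_len_m / box_len_px
    return disp_px * metres_per_px * fps
```

Masking the detections before feature extraction is what keeps the RANSAC homography anchored to the static background rather than the moving vehicles.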
With the rapid development of Unmanned Aerial Vehicle (UAV) systems, the autonomous landing of a UAV on a moving Unmanned Ground Vehicle (UGV) has received extensive attention as a key technology. At present, this technology still faces several problems: operation in GPS-denied environments, low target-localization accuracy, imprecise relative motion estimation, delayed control responses, slow processing speeds, and poor stability. To address these issues, we present a hybrid camera array-based system that enables a UAV to land autonomously on a moving UGV in a GPS-denied environment. We first built a UAV autonomous landing system with a hybrid camera array comprising a fisheye camera and a stereo camera. Then, we integrated a wide Field of View (FOV) with depth imaging to locate the UGV accurately. In addition, we employed a motion-compensation-based state estimation algorithm to establish the motion state of the moving UGV, including its actual direction and speed. Thereafter, according to the characteristics of the designed system, we derived a nonlinear controller driven by the UGV motion state to ensure that the UAV and UGV maintain the same motion state, which enables autonomous landing. Finally, to evaluate the performance of the proposed system, we carried out extensive simulations in AirSim and conducted real-world experiments. Through qualitative and quantitative analyses of the experimental results, together with an analysis of time performance, we verified that the autonomous landing performance of the system in GPS-denied environments is effective and robust.
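The paper derives its own nonlinear controller; purely as an illustration of the velocity-matching idea, the following Python sketch feeds the estimated UGV velocity forward and adds a bounded (tanh-saturated) proportional correction, descending only once the UAV is roughly above the platform. It assumes an up-positive z axis with the landing platform near z = 0; all gains and names are hypothetical, not the paper's control law.

```python
import numpy as np

def landing_velocity_command(uav_pos, ugv_pos, ugv_vel,
                             kp=0.8, kz=0.5, v_max=3.0, descend_radius=0.5):
    """Illustrative velocity-matching command for landing on a moving UGV.

    uav_pos, ugv_pos: 3D positions in a shared local frame (metres).
    ugv_vel: estimated UGV ground velocity (m/s), used as feedforward.
    Returns a commanded UAV velocity: track the UGV horizontally, and
    descend only once the horizontal error is inside `descend_radius`.
    """
    err = np.asarray(ugv_pos) - np.asarray(uav_pos)

    # Feed the UGV velocity forward so both vehicles share the same motion
    # state, plus a saturated proportional term on the horizontal error
    # (tanh keeps the correction bounded, a common nonlinear choice).
    cmd_xy = np.asarray(ugv_vel)[:2] + v_max * np.tanh(kp * err[:2])

    # Descend proportionally only when roughly above the landing platform
    # (assumes z is up and the platform sits near z = 0).
    if np.linalg.norm(err[:2]) < descend_radius:
        cmd_z = -kz * uav_pos[2]
    else:
        cmd_z = 0.0

    return np.array([cmd_xy[0], cmd_xy[1], cmd_z])
```

Gating the descent on horizontal error is a simple way to encode the requirement that the UAV first match the UGV's motion state before committing to the landing.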
Pedestrian detection is among the most frequently used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night remains a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which estimates the dynamic changes in the three-dimensional geometry of the surveillance scene and segments and locates foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and extracted pedestrian shadows are used to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, qualitative and quantitative comparisons show that our work outperforms classical background subtraction approaches and a recent RGB-D method, and achieves performance comparable to a state-of-the-art deep learning pedestrian detector at a much lower hardware cost.
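As a rough illustration of the voxel surface idea (not the paper's exact formulation), the following Python sketch bins stereo-reconstructed 3D points into an occupancy grid, flags voxels that depart from a slowly learned background as foreground, and updates the background with a running average. The paper's free-update policy for unknown points and its shadow-based false-alarm removal are more elaborate than what is shown; all names and parameters are illustrative.

```python
import numpy as np

class VoxelSurfaceModel:
    """Illustrative voxel-occupancy background model: scene points from
    stereo depth are binned into a 3D grid, and voxels whose occupancy
    departs from the learned background are reported as foreground.
    """

    def __init__(self, bounds, voxel_size=0.1, alpha=0.05, thresh=0.5):
        # bounds: ((x0, y0, z0), (x1, y1, z1)) extent of the scene in metres.
        self.origin = np.asarray(bounds[0], dtype=np.float32)
        extent = np.asarray(bounds[1], dtype=np.float32) - self.origin
        self.shape = np.ceil(extent / voxel_size).astype(int)
        self.voxel_size = voxel_size
        self.bg = np.zeros(self.shape, dtype=np.float32)  # background occupancy
        self.alpha = alpha    # learning rate of the background model
        self.thresh = thresh  # occupancy difference flagged as foreground

    def _occupancy(self, points):
        """Bin 3D points (N x 3, metres) into a binary occupancy grid."""
        idx = ((points - self.origin) / self.voxel_size).astype(int)
        ok = np.all((idx >= 0) & (idx < self.shape), axis=1)
        occ = np.zeros(self.shape, dtype=np.float32)
        occ[tuple(idx[ok].T)] = 1.0
        return occ

    def update_and_segment(self, points):
        """Return foreground voxel indices and absorb statics into bg."""
        occ = self._occupancy(points)
        foreground = (occ - self.bg) > self.thresh
        # Simple running-average update; the paper's free-update policy for
        # unknown (unobserved) points is more selective than this.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * occ
        return np.argwhere(foreground)
```

Because the model lives in 3D rather than in the image plane, a pedestrian partially occluded in one view can still produce foreground voxels wherever the stereo reconstruction sees part of their surface, which is what makes segmentation under heavy occlusion plausible.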
In recent years, unmanned aerial vehicles (UAVs) have developed rapidly, but the illegal use of UAVs by civilians has resulted in disorder and security risks and has triggered increasing public concern. Therefore, the monitoring and recycling of UAVs in key regions are of great significance. This paper presents a novel panoramic UAV surveillance and autonomous recycling system that is based on a unique structure-free fisheye camera array and provides real-time UAV detection, 3D localization, tracking, and recycling over a very wide field of view. The main contributions of this paper include the following: 1) constructing a structure-free camera-array-based panoramic UAV surveillance and recycling system; 2) designing a robust dynamic near-infrared laser-source-based self-calibration algorithm for large-scale, arbitrary layouts of the camera array; and 3) presenting a set of UAV detection, 3D localization, tracking, and autonomous recycling algorithms based on a fisheye camera array. The system has been tested in various challenging scenarios, including multiple UAVs with significant appearance and scale changes and even poor weather conditions. Extensive experimental results, analyzed both qualitatively and quantitatively, together with an analysis of time performance, demonstrate the robustness and effectiveness of the proposed system. In addition, we successfully conduct a recycling and landing experiment with the Parrot Bebop.
Index Terms: UAV, structure-free fisheye camera array, surveillance and recycling system.
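One piece of the localization pipeline that can be sketched independently is the multi-view triangulation of a detected UAV. The Python snippet below is a standard linear DLT triangulation, assuming the fisheye images have already been undistorted to a pinhole model and that each camera's 3x4 projection matrix comes from the self-calibration step; it is an assumption-laden illustration, not the paper's algorithm.

```python
import numpy as np

def triangulate_dlt(proj_mats, pixels):
    """Triangulate one 3D point from N calibrated views via linear DLT.

    proj_mats: list of 3x4 camera projection matrices (from calibration).
    pixels: list of (u, v) observations of the same target, one per camera.
    Returns the 3D point in the common world frame.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on the homogeneous X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector of the smallest
    # singular value of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With more than two cameras, the extra rows simply over-constrain the least-squares problem, which is one reason a large camera array can remain robust when the target is occluded or poorly seen in individual views.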