Detecting objects in images captured by Unmanned Aerial Vehicles (UAVs) is a demanding and challenging task: backgrounds are typically cluttered, and foreground targets vary widely in scale, with small objects in particular carrying very limited information. Multi-scale representation learning is a promising approach to recognizing small objects. However, this strategy ignores how the sub-parts of an object combine and suffers from background interference during feature fusion. To this end, we propose a Fine-grained Target Focusing Network (FiFoNet) that effectively selects a combination of multi-scale features for an object and blocks background interference, thereby revitalizing the discriminability of the multi-scale feature representation. Furthermore, we propose a Global–Local Context Collector (GLCC) to extract global and local contextual information and enhance the low-quality representations of small objects. We evaluate FiFoNet on the challenging task of object detection in UAV images. Experimental results on three datasets, namely VisDrone2019, UAVDT, and our VisDrone_Foggy, demonstrate the effectiveness of FiFoNet, which outperforms ten baseline and state-of-the-art models with remarkable performance improvements. When deployed on an NVIDIA Jetson Xavier NX edge device, FiFoNet takes only about 80 milliseconds to process a drone-captured image.
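The abstract does not spell out the GLCC architecture, so the following PyTorch sketch only illustrates the general idea of fusing global and local context to enrich one level of a multi-scale feature pyramid; the class name, layer choices, and channel sizes are all assumptions, not FiFoNet's actual design.

```python
import torch
import torch.nn as nn


class GlobalLocalContextCollector(nn.Module):
    """Illustrative global-local context module (a sketch, not the paper's design)."""

    def __init__(self, channels: int):
        super().__init__()
        # Global branch: image-level statistics re-weight each channel,
        # suppressing channels dominated by cluttered background.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Local branch: a dilated convolution enlarges the receptive field
        # around small objects without losing spatial resolution.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.global_branch(x) * x  # globally re-weighted features
        loc = self.local_branch(x)     # locally enriched features
        # Residual fusion keeps the original signal intact.
        return self.fuse(torch.cat([g, loc], dim=1)) + x


# Usage: enhance one level of a multi-scale feature pyramid.
feat = torch.randn(1, 256, 64, 64)
glcc = GlobalLocalContextCollector(256)
enhanced = glcc(feat)  # same shape as the input, context-enriched
```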
Benefiting from advances in deep neural networks (DNNs), object detection in drone-view images has achieved great success in recent years. However, deploying such DNN-based detectors on drones in real-life applications remains very challenging due to their excessive computational costs and the limited onboard computational resources. Substantial computation is wasted because existing drone-view detectors process all inputs with nearly identical computation, even though a less complex detector would suffice for a large portion of inputs, namely those containing a small number of sparsely distributed, large objects. Therefore, a drone-view detector supporting input-aware inference, i.e., one capable of dynamically adapting its architecture to different inputs, is highly desirable. In this work, we present a Dynamic Context Collection Network (DyCC-Net), which performs input-aware inference by dynamically adapting its structure to inputs of different complexities. DyCC-Net significantly improves inference efficiency by skipping or executing a context collector conditioned on the complexity of the input image. Furthermore, because a weakly supervised learning strategy for computational resource allocation lacks supervision, the model may execute the computationally expensive context collector even for easy images in order to minimize the detection loss. We therefore present a Pseudo-label-based semi-supervised Learning strategy (Pseudo Learning), which uses automatically generated pseudo labels as supervision signals to decide whether to execute the context collector for a given input. Extensive experimental results on VisDrone2021 and UAVDT show that DyCC-Net detects objects in drone-captured images efficiently, reducing the inference time of state-of-the-art (SOTA) drone-view detectors by over 30% while outperforming them by 1.94% in AP75.
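As a rough illustration of input-aware execution (not DyCC-Net's actual implementation; the gate design, threshold, and pseudo-label definition below are assumptions), a lightweight gate can score each image's complexity and skip the expensive context collector for easy inputs, with pseudo labels supervising the gate during training:

```python
import torch
import torch.nn as nn


class DynamicGate(nn.Module):
    """Lightweight scorer that estimates whether an image is 'hard'."""

    def __init__(self, channels: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Probability that the image needs the expensive context collector.
        return torch.sigmoid(self.scorer(x)).squeeze(1)


class DyCCBlock(nn.Module):
    """Runs a context module conditionally on the gate's decision (a sketch)."""

    def __init__(self, channels: int, context_module: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.gate = DynamicGate(channels)
        self.context = context_module  # the costly context collector
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.gate(x)
        if self.training:
            # Soft mixing keeps the gate differentiable. A pseudo label y
            # (e.g., y = 1 when the image contains many small objects) can
            # supervise p via binary cross-entropy alongside the detection
            # loss, so the gate does not run the collector on easy images.
            p = p.view(-1, 1, 1, 1)
            return p * self.context(x) + (1 - p) * x
        # At inference (batch size 1, as on a drone), take a hard decision
        # and skip the collector entirely for easy images.
        return self.context(x) if p.item() > self.threshold else x


# Usage: wrap any context module (a simple stand-in is used here).
context = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2),
    nn.ReLU(inplace=True),
)
block = DyCCBlock(256, context).eval()
with torch.no_grad():
    out = block(torch.randn(1, 256, 64, 64))  # collector runs only if gated "hard"
```

The hard skip at inference is what saves the computation; the soft mixing during training is one common way to keep such a discrete decision trainable end-to-end.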