In recent years, deep learning methods have achieved great success on vehicle detection tasks in aerial imagery. However, most existing methods focus only on extracting latent vehicle target features, and rarely consider the scene context as vital prior knowledge. In this letter, we propose a scene context attention-based fusion network (SCAF-Net) that fuses the scene context of vehicles into an end-to-end vehicle detection network. First, we propose a novel strategy, patch cover, to preserve as much of the original target and scene context information in large-scale raw aerial images as possible. Next, we use an improved YOLO-v3 network as one branch of SCAF-Net to generate vehicle candidates on each patch. Here, a novel branch for the scene context is utilized to extract the latent scene context of vehicles on each patch without any extra annotations. Then, these two branches are concatenated together as a fusion network, and we apply an attention-based model to further extract vehicle candidates from each local scene. Finally, the vehicle candidates of all patches are merged by global non-maximum suppression (g-NMS) to output the detection result for the whole original image. Experimental results demonstrate that our proposed method outperforms the comparison methods in both detection accuracy and speed. Our code is released at https://github.com/minghuicode/SCAF-Net.
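The merging step described above can be illustrated with a minimal sketch: candidates from all patches, already mapped into global image coordinates, are sorted by confidence and duplicates of the same vehicle are suppressed by an IoU test. This is a generic greedy NMS shown for illustration; the exact g-NMS used by SCAF-Net may differ in detail, and the box format `(x1, y1, x2, y2)` and threshold value are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def global_nms(detections, iou_thresh=0.5):
    """Merge (box, score) candidates from all patches in global coordinates,
    keeping only the highest-scoring box among overlapping duplicates."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in detections:
        if all(iou(box, kb) < iou_thresh for kb, _ in kept):
            kept.append((box, score))
    return kept
```

For example, two overlapping patches that both detect the same vehicle produce two highly overlapping boxes; after `global_nms`, only the higher-scoring one survives in the whole-image result.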
With the continuous progress of remote sensing image object detection in recent years, researchers in this field have gradually shifted their focus from horizontal object detection to object detection in arbitrary directions. It is worth noting that oriented object detection has properties distinct from horizontal object detection to which researchers have paid little attention. This article presents a straightforward and efficient arbitrary-oriented detection system that leverages the inherent properties of the orientation task, including the rotation angle and box aspect ratio. For low-aspect-ratio objects, the angle matters little to the oriented bounding box, and in extreme cases the angle is difficult even to define. Conversely, for objects with high aspect ratios, the angle plays a crucial role and can have a decisive impact on the quality of the detection results. By exploiting the aspect ratio of different targets, this letter proposes a ratio-balanced angle loss that allows the model to make a better trade-off between low-aspect-ratio and high-aspect-ratio objects. We naturally embed the rotation angle of each oriented object into a two-dimensional Euclidean space for regression, avoiding an overly redundant design while preserving the topological properties of the circular space. Results on the UCAS-AOD, HRSC2016, and DLR-3K datasets show that the proposed model achieves a leading level in terms of both accuracy and speed. The code is released at https://github.com/minghuicode/Periodic-Pseudo-Domain.
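The two ideas above can be sketched in a few lines. Embedding an angle as a point on the unit circle gives a two-dimensional regression target with no discontinuity at the periodic boundary, and an aspect-ratio-dependent weight can down-weight the angle term for near-square boxes. Both functions are illustrative assumptions: the abstract does not give the exact form of the ratio-balanced loss, so `ratio_weight` is a hypothetical example of such a weighting.

```python
import math

def encode_angle(theta):
    """Embed an angle as a point on the unit circle: a 2-D Euclidean
    regression target that respects the topology of the circular space."""
    return (math.cos(theta), math.sin(theta))

def decode_angle(x, y):
    """Recover the angle from a (possibly unnormalized) 2-D prediction."""
    return math.atan2(y, x)

def ratio_weight(w, h):
    """Hypothetical aspect-ratio weight for the angle loss: near zero for
    almost-square boxes (where orientation is ill-defined), approaching
    one for elongated boxes (where orientation is decisive)."""
    r = max(w, h) / min(w, h)
    return 1.0 - 1.0 / r
```

The benefit of the embedding shows up at the periodic boundary: as scalars, angles just below and just above ±π are far apart, but their unit-circle embeddings are nearly identical, so the regression loss stays small for a nearly correct orientation.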
Object search is a challenging yet important task. Many efforts have been made to address this issue and have achieved great progress in natural images, yet searching for all specified types of objects in remote sensing images is barely studied. In this work, we are interested in searching for objects in remote sensing images. Compared to person search in natural scenes, this task is challenging for two reasons: first, a remote sensing image usually contains a large number of objects, which poses a great challenge to characterizing object features; second, the objects in remote sensing images are dense, which easily yields erroneous localization. To address these issues, we propose a new end-to-end deep learning framework for object search in remote sensing images. First, we propose a multi-scale feature aggregation (MSFA) module, which strengthens the representation of low-level features by fusing multi-layer features. The fused features, with richer details, significantly improve the accuracy of object search. Second, we propose a dual-attention object enhancement (DAOE) module to enhance features along the channel and spatial dimensions. The enhanced features significantly improve localization accuracy for dense objects. Finally, we built two challenging datasets based on remote sensing images, which contain complex changes in space and time. The experiments and comparisons demonstrate the state-of-the-art performance of our method on these challenging datasets.
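The dual-attention idea, enhancing a feature map along its channel and spatial dimensions, can be sketched without learned parameters. A minimal sketch, assuming a `(C, H, W)` feature tensor: each channel is gated by a sigmoid of its global average response, then each spatial position is gated by a sigmoid of the channel-averaged response. The actual DAOE module uses learned layers, so this only illustrates the two gating dimensions, not the paper's architecture.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel by its global-average-pooled response (C, H, W)."""
    w = _sigmoid(feat.mean(axis=(1, 2)))      # one gate per channel
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Gate each spatial position by the channel-averaged saliency map."""
    m = _sigmoid(feat.mean(axis=0))           # (H, W) gate
    return feat * m[None, :, :]

def dual_attention(feat):
    """Sequential channel-then-spatial enhancement of a feature map."""
    return spatial_attention(channel_attention(feat))
```

Because both gates are sigmoids, the output is an element-wise reweighting of the input: responsive channels and salient positions are preserved, while weak ones are attenuated, which is what sharpens localization for dense objects.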