Object detection in remotely sensed satellite imagery is fundamental in many fields, such as biophysical and environmental monitoring. While deep learning algorithms are constantly evolving, they have mostly been implemented and tested on ground-level photographs. This paper critically evaluates and compares a suite of advanced object detection algorithms adapted to the task of identifying aircraft in satellite imagery. The goal is to enable researchers to choose efficiently among algorithms that are trainable and usable in real time on deep learning infrastructure with moderate requirements. Using the large HRPlanesV2 dataset, together with rigorous validation on the GDIT dataset, this research encompasses an array of methodologies including YOLO versions 5, 8, and 10, Faster R-CNN, CenterNet, RetinaNet, RTMDet, DETR, and Grounding DINO, all trained from scratch. This exhaustive training and validation study reveals YOLOv5 as the pre-eminent model for the specific case of identifying airplanes in remote sensing data, showing high precision and adaptability across diverse imaging conditions. This research highlights the nuanced performance landscapes of these algorithms, with YOLOv5 emerging as a robust solution for aerial object detection, underlining its strength through superior mean average precision, recall, and intersection over union scores. The findings described here underscore the fundamental role of algorithm selection aligned with the specific demands of satellite imagery analysis and provide a comprehensive framework for evaluating model efficacy. This aims to foster exploration and innovation in the realm of remote sensing object detection, paving the way for improved satellite imagery applications.
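The abstract compares detectors by mean average precision, recall, and intersection over union (IoU). As a minimal illustrative sketch (not code from the paper), IoU for two axis-aligned bounding boxes in `(x1, y1, x2, y2)` form can be computed as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice); precision and recall at each threshold are then aggregated into mean average precision.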
Understanding geometric and biophysical characteristics is essential for determining grapevine vigor and improving input management and automation in viticulture. This study compares point cloud data obtained from a Terrestrial Laser Scanner (TLS) and various UAV sensors, including multispectral, panchromatic, Thermal Infrared (TIR), RGB, and LiDAR data, to estimate geometric parameters of grapevines. Descriptive statistics, linear correlations, the F-test of overall significance, and box plots were used for analysis. The results indicate that 3D point clouds from these sensors can accurately estimate maximum grapevine height, projected area, and volume, though with varying degrees of accuracy. The TLS data showed the highest correlation with grapevine height (r = 0.95, p < 0.001; R² = 0.90; RMSE = 0.027 m), while point cloud data from panchromatic, RGB, and multispectral sensors also performed well, closely matching TLS and measured values (r > 0.83, p < 0.001; R² > 0.70; RMSE < 0.084 m). In contrast, TIR point cloud data performed poorly in estimating grapevine height (r = 0.76, p < 0.001; R² = 0.58; RMSE = 0.147 m) and projected area (r = 0.82, p < 0.001; R² = 0.66; RMSE = 0.165 m). The greater variability observed in projected area and volume from UAV sensors is related to the low point density associated with spatial resolution. These findings are valuable for both researchers and winegrowers, as they support the optimization of TLS and UAV sensors for precision viticulture, providing a basis for further research and helping farmers select appropriate technologies for crop monitoring.
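The sensor comparison above is reported through Pearson's r, the coefficient of determination R², and RMSE between measured and point-cloud-derived values. As a hedged sketch (the function and variable names are illustrative, not from the study), these statistics can be computed for paired height measurements as:

```python
import math

def fit_stats(measured, estimated):
    """Return (r, R^2, RMSE) for paired measured vs. estimated values."""
    n = len(measured)
    mean_m = sum(measured) / n
    mean_e = sum(estimated) / n
    # Pearson correlation: covariance over the product of standard deviations.
    sxy = sum((a - mean_m) * (b - mean_e) for a, b in zip(measured, estimated))
    sxx = sum((a - mean_m) ** 2 for a in measured)
    syy = sum((b - mean_e) ** 2 for b in estimated)
    r = sxy / math.sqrt(sxx * syy)
    # Root-mean-square error of the estimates against the measurements.
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(measured, estimated)) / n)
    return r, r * r, rmse
```

Here R² is taken as the square of Pearson's r, which matches the r/R² pairs quoted in the abstract (e.g. r = 0.95 with R² = 0.90); RMSE is in the same units as the measurements (metres for grapevine height).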