Unmanned aerial vehicle (UAV) remote sensing and deep learning provide a practical approach to object detection. However, most current approaches for processing UAV remote-sensing data cannot carry out object detection in real time for emergencies, such as firefighting. This study proposes a new approach that integrates UAV remote sensing and deep learning for the real-time detection of ground objects. Excavators, which frequently threaten pipeline safety, are selected as the target object. A widely used deep-learning algorithm, You Only Look Once V3, is first used to train the excavator detection model on a workstation; the model is then deployed on an embedded board carried by a UAV. The recall rate of the trained excavator detection model is 99.4%, indicating very high detection performance. A UAV-based excavator detection system (UAV-ED) is then constructed for operational application. UAV-ED is composed of a UAV Control Module, a UAV Module, and a Warning Module. A UAV experiment with different scenarios was conducted to evaluate the performance of the UAV-ED. The whole process, from the UAV observing an excavator to the Warning Module (350 km away from the test area) receiving the detection results, took only about 1.15 s. Thus, the UAV-ED system performs well and would benefit the management of pipeline safety.
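The abstract reports only the end-to-end observation-to-warning delay of about 1.15 s; the per-stage breakdown below (capture, onboard inference, transmission) is purely illustrative, as a minimal sketch of how such a latency budget might be decomposed and checked:

```python
# Hypothetical stage timings in seconds -- the paper gives only the
# end-to-end figure (~1.15 s), so this breakdown is an assumption
# for illustration, not measured values from the study.
STAGE_LATENCY_S = {
    "onboard_image_capture": 0.05,
    "onboard_yolo_v3_inference": 0.60,   # embedded-board inference
    "result_transmission_350km": 0.50,   # detection result to Warning Module
}

def end_to_end_latency(stages: dict) -> float:
    """Sum per-stage latencies to get the observation-to-warning delay."""
    return sum(stages.values())

total = end_to_end_latency(STAGE_LATENCY_S)
```

Summing these hypothetical stages reproduces a total on the order of the reported 1.15 s.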
Rapid invasion of Spartina alterniflora into Chinese coastal wetlands has attracted much attention. Many field and remote sensing studies have examined the spatio-temporal dynamics of S. alterniflora invasion; however, spatially explicit quantitative analyses of S. alterniflora invasion and its underlying mechanisms at both patch and landscape scales are seldom reported. To fill this knowledge gap, we integrated multi-temporal unmanned aerial vehicle (UAV) imagery, light detection and ranging (LiDAR)-derived elevation data, and tidal and meteorological time series to explore the growth potential (lateral expansion rates and canopy greenness) of S. alterniflora over the intertidal zone in a subtropical coastal wetland (Zhangjiang estuarine wetland, Fujian, China). Our analyses of patch expansion indicated that isolated S. alterniflora patches in this wetland experienced high lateral expansion over the past several years (averaging 4.28 m/year in patch diameter during 2014–2017), and lateral expansion rates (y, m/year) showed a statistically significant declining trend with increasing inundation (x, h/day; 3 ≤ x ≤ 18): y = −0.17x + 5.91, R² = 0.78. Our analyses of canopy greenness showed that the seasonality of the growth potential of S. alterniflora was driven by temperature (Pearson correlation coefficient r = 0.76) and precipitation (r = 0.68), with the growth potential peaking in early/middle summer with high temperature and adequate precipitation. Taken together, we conclude that the growth potential of S. alterniflora was co-regulated by tidal and meteorological regimes, in which spatial heterogeneity is controlled by tidal inundation while temporal variation is controlled by both temperature and precipitation.
To the best of our knowledge, this is the first spatially explicit quantitative study to examine the influences of tidal and meteorological regimes on both the spatial heterogeneity (over the intertidal zone) and the temporal variation (intra- and inter-annual) of S. alterniflora at both patch and landscape scales. These findings could serve as critical empirical evidence to help answer how coastal salt marshes respond to climate change and to assess the vulnerability and resilience of coastal salt marshes to sea-level rise. Our UAV-based methodology could be applied to many types of plant community distributions.
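The fitted relationship above (y = −0.17x + 5.91, R² = 0.78, valid for 3 ≤ x ≤ 18 h/day) can be expressed as a small helper for predicting lateral expansion from daily inundation time; this is a direct transcription of the reported fit, not an independent implementation of the study's analysis:

```python
def lateral_expansion_rate(inundation_hours: float) -> float:
    """Predicted lateral expansion rate (m/year) of an isolated
    S. alterniflora patch as a linear function of daily tidal
    inundation time (h/day), using the fit reported in the abstract:
    y = -0.17 x + 5.91 (R^2 = 0.78), valid only for 3 <= x <= 18.
    """
    if not 3 <= inundation_hours <= 18:
        raise ValueError("fit was reported only for 3 <= x <= 18 h/day")
    return -0.17 * inundation_hours + 5.91
```

For example, a patch inundated 3 h/day is predicted to expand at about 5.40 m/year, while one inundated 18 h/day expands at about 2.85 m/year, consistent with expansion slowing at lower (wetter) elevations.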
Cross-view image matching has attracted extensive attention due to its huge potential in applications such as localization and navigation. Unmanned aerial vehicle (UAV) technology has developed rapidly in recent years, and people have more opportunities to obtain and use UAV-view images than ever before. However, algorithms for cross-view image matching between the UAV view (oblique view) and the satellite view (vertical view) are still at an early stage, and the matching accuracy needs further improvement for real-world application. Within this context, we propose a cross-view matching method based on location classification (hereinafter referred to as LCM), in which the similarity between UAV and satellite views is considered, and we implement the method on the newest UAV-based geo-localization dataset (University-1652). LCM is able to address the imbalance in the number of input samples between satellite images and UAV images. In the training stage, LCM simplifies the retrieval problem into a classification problem and accounts for the influence of the feature vector size on matching accuracy. Compared with a previous study, LCM achieves higher accuracies, improving Recall@K (K ∈ {1, 5, 10}) and average precision (AP) by 5–10%. The expansion of satellite-view images and the multiple queries proposed by the LCM further improve the matching accuracy in our experiments. In addition, we examine the influence of different feature sizes on the LCM's accuracy and find that 512 is the optimal feature size. Finally, the LCM model trained on synthetic UAV-view images is evaluated in real-world situations, and the evaluation shows that it still achieves satisfactory matching accuracy.
The LCM performs bidirectional matching between UAV-view and satellite-view images and can contribute to two applications: (i) UAV-view image localization (i.e., predicting the geographic location of UAV-view images based on satellite-view images with geo-tags) and (ii) UAV navigation (i.e., driving the UAV to the region of interest in the satellite-view image based on the flight record).
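The Recall@K and AP metrics used to evaluate LCM can be sketched for the single-ground-truth retrieval setting typical of image geo-localization (each query has exactly one true match in the gallery); the function names here are illustrative, not from the paper:

```python
def recall_at_k(true_match_ranks: list[int], k: int) -> float:
    """Fraction of queries whose single true match appears within the
    top-k retrieved results. `true_match_ranks` holds the 1-based rank
    of the correct gallery image for each query."""
    return sum(1 for rank in true_match_ranks if rank <= k) / len(true_match_ranks)

def average_precision_single(rank: int) -> float:
    """AP for a query with exactly one relevant gallery item at the
    given 1-based rank; with a single relevant item, AP reduces to
    the reciprocal of that rank."""
    return 1.0 / rank
```

For example, if the true matches for four queries are retrieved at ranks 1, 3, 7, and 12, then Recall@5 = 0.5 and Recall@10 = 0.75.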