Aerial or satellite images are conventionally used for geospatial data collection. However, unmanned aerial vehicles (UAVs) are emerging as a suitable technology for providing very high spatial and temporal resolution data at low cost. This paper aims to show the potential of using UAVs for map creation and updating. The whole workflow is introduced using a case study in Rwanda, where 954 images were collected with a DJI Phantom 2 Vision Plus quadcopter. An orthophoto covering 0.095 km² with a spatial resolution of 3.3 cm was produced and used to extract features with sub-decimetre accuracy. Quantitative and qualitative quality control of the UAV data products was performed, indicating that the obtained accuracies comply with international standards. Possible problems and further perspectives are also discussed. The results demonstrate that UAVs provide promising opportunities to create high-resolution and highly accurate orthophotos, thus facilitating map creation and updating.
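For orientation, the 3.3 cm resolution reported above is the ground sample distance (GSD), which follows directly from the camera geometry and flying height. Below is a minimal sketch of the standard GSD formula; the sensor values and flying height are illustrative assumptions, not the exact DJI Phantom 2 Vision Plus specification or the flight parameters of the study.

```python
# Ground sample distance (GSD) of a nadir UAV image.
# Illustrative sketch: sensor width, focal length, image width and flying
# height below are assumed typical small-quadcopter values, not the exact
# DJI Phantom 2 Vision Plus specification.

def gsd_cm(sensor_width_mm: float, focal_length_mm: float,
           flight_height_m: float, image_width_px: int) -> float:
    """GSD (cm/px) = (sensor width * flying height) / (focal length * image width)."""
    return (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

# With a ~6.2 mm wide sensor, a 5 mm lens and a 4384 px image width, a flying
# height of roughly 115 m would reproduce a GSD of about 3.3 cm/px.
print(f"{gsd_cm(6.2, 5.0, 115.0, 4384):.1f} cm/px")  # -> 3.3 cm/px
```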
There is a growing demand for cheap and fast cadastral mapping methods to address the estimated 70% of land rights worldwide that remain unregistered. As traditional on-site field surveying is time-consuming and labor-intensive, imagery-based cadastral mapping has in recent years been advocated by fit-for-purpose (FFP) land administration. However, owing to the semantic gap between the high-level cadastral boundary concept and the low-level visual cues in the imagery, improving the accuracy of automatic boundary delineation remains a major challenge. In this research, we use imagery acquired by Unmanned Aerial Vehicles (UAVs) to explore the potential of deep Fully Convolutional Networks (FCNs) for cadastral boundary detection in urban and semi-urban areas. We test the performance of FCNs against other state-of-the-art techniques, including Multi-Resolution Segmentation (MRS) and Globalized Probability of Boundary (gPb), in two case study sites in Rwanda. Experimental results show that FCNs outperformed MRS and gPb in both study areas, achieving on average a precision of 0.79, a recall of 0.37 and an F-score of 0.50. In conclusion, FCNs are able to effectively extract cadastral boundaries, especially when a large proportion of the boundaries are visible. This automated method could minimize manual digitization and reduce field work, thus facilitating current cadastral mapping and updating practices.
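As a point of reference, the reported F-score is consistent with F = 2PR/(P + R) ≈ 0.50 for P = 0.79 and R = 0.37. An FCN of the kind evaluated here maps an input image tile to a same-sized per-pixel boundary probability map; the following is a minimal toy sketch in PyTorch, an assumed illustrative architecture rather than the network used in the paper.

```python
# Minimal fully convolutional network (FCN) sketch for pixel-wise cadastral
# boundary detection, in PyTorch. Toy architecture for illustration only;
# not the network evaluated in the study.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                           # per-pixel boundary logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))  # logits; sigmoid gives probabilities

# A 256x256 RGB tile in, a 256x256 boundary probability map out.
tile = torch.randn(1, 3, 256, 256)
prob = torch.sigmoid(TinyFCN()(tile))
print(prob.shape)  # torch.Size([1, 1, 256, 256])
```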
Cadastral boundaries are often demarcated by objects that are visible in remote sensing imagery. Indirect surveying relies on the delineation of visible parcel boundaries from such images. Despite advances in the automated detection and localization of objects in images, indirect surveying is rarely automated and still relies on manual on-screen delineation. We previously introduced a boundary delineation workflow, comprising image segmentation, boundary classification and interactive delineation, which we applied to Unmanned Aerial Vehicle (UAV) data to delineate roads. In this study, we improve each of these steps. For image segmentation, we remove the need to reduce the image resolution and limit over-segmentation by filtering out 80% of the segment lines. For boundary classification, we show how Convolutional Neural Networks (CNNs) can be used to classify boundary lines, eliminating the previous need for Random Forest (RF) feature generation and achieving 71% accuracy. For interactive delineation, we develop additional and more intuitive delineation functionalities that cover more application cases. We test our approach on larger and more varied data sets, applying it to UAV and aerial imagery of 0.02-0.25 m resolution from Kenya, Rwanda and Ethiopia. We show that it is more effective in terms of clicks and time than manual delineation for parcels surrounded by visible boundaries. The strongest advantages are obtained for rural scenes delineated from aerial imagery, where delineation requires 38% less time and 80% fewer clicks per parcel than manual delineation.
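To make the boundary-classification step concrete, the sketch below labels image patches sampled along candidate segment lines with a small CNN, deciding for each line whether it coincides with a visible boundary. The patch size, sampling strategy and architecture are illustrative assumptions, not the configuration used in the study.

```python
# Sketch of CNN-based boundary line classification: image patches sampled
# along candidate segment lines are labeled "boundary" / "not boundary".
# Patch size and architecture are assumptions for illustration.
import torch
import torch.nn as nn

class LinePatchClassifier(nn.Module):
    def __init__(self, patch: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch // 4) ** 2, 2),  # boundary vs. non-boundary logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One 64x64 patch per candidate line (e.g., at its midpoint);
# argmax over the logits gives the predicted label per line.
patches = torch.randn(8, 3, 64, 64)            # 8 candidate lines
labels = LinePatchClassifier()(patches).argmax(dim=1)
print(labels)
```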
In recent years, the development of high-quality Inertial Measurement Units (IMUs), GNSS technology and dedicated RTK (Real Time Kinematic) and PPK (Post-Processing Kinematic) solutions for UAVs has promised accurate measurement of the exterior orientation (EO) parameters used to georeference the images. Whereas the positive impact of precisely known GNSS coordinates of the camera positions is already well studied, the influence of the angular observations has not yet been studied in depth. Challenges include the accuracy of the GNSS/IMU observations, excessive angular motion and time-synchronization problems during the flight. This study therefore assesses the final geometric accuracy of direct georeferencing with high-quality post-processed IMU/GNSS data and PPK corrections. A comparison of different data processing scenarios, including indirect georeferencing, integrated solutions and direct georeferencing, provides guidance on the workability of UAV mapping approaches that require a high level of positional accuracy. The results show that the use of post-processed APX-15 GNSS and IMU data was particularly beneficial for enhancing image orientation quality; horizontal accuracies at the pixel level (2.8 cm) could be achieved. However, the angular EO parameters are still too inaccurate to be assigned a high weight during image orientation. Detailed investigation of the EO parameters further reveals that systematic sensor misalignments and offsets of the image block can be reduced by introducing four GCPs. In this regard, the use of PPK corrections reduces the time-consuming field work of measuring large numbers of GCPs and makes large-scale UAV mapping a more feasible solution for practitioners requiring high geometric accuracy.
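To illustrate the role of the angular EO parameters in direct georeferencing, the sketch below builds a rotation matrix from (ω, φ, κ) and projects a ground point into image coordinates via the standard photogrammetric collinearity equations, under one common sign convention. All numbers are placeholders, not values from this study.

```python
# Direct georeferencing sketch: rotation matrix from the angular exterior
# orientation (EO) parameters (omega, phi, kappa) and projection of a ground
# point into image coordinates via the collinearity equations.
# One common convention; all numeric values are placeholders.
import numpy as np

def rotation_opk(omega: float, phi: float, kappa: float) -> np.ndarray:
    """R = R_x(omega) @ R_y(phi) @ R_z(kappa), angles in radians."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(ground_xyz, camera_xyz, R, focal_mm):
    """Collinearity: x = -f * d[0] / d[2], y = -f * d[1] / d[2],
    with d the ground-to-camera vector rotated into the image frame."""
    d = R.T @ (np.asarray(ground_xyz) - np.asarray(camera_xyz))
    return (-focal_mm * d[0] / d[2], -focal_mm * d[1] / d[2])

# Near-nadir attitude with a small roll/pitch perturbation (placeholder values).
R = rotation_opk(np.radians(1.0), np.radians(-0.5), np.radians(90.0))
print(project([100.0, 200.0, 10.0], [100.5, 200.5, 120.0], R, 5.0))
```

Because the image coordinates depend on the rotated vector d, small errors in (ω, φ, κ) translate into ground displacements that grow with flying height, which is why inaccurate angular observations must be down-weighted during image orientation.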
Department of Urban and Regional Planning and Geo-Information Management (ITC-PGM), University of Twente, Enschede, The Netherlands.