The timely and efficient generation of detailed damage maps following disaster events is of fundamental importance to speed up first responders' (FRs) rescue activities and help trapped victims. Several works dealing with the automated detection of building damage have been published in the last decade. The increasingly widespread availability of inexpensive UAV platforms has also driven their recent adoption for rescue operations (i.e., search and rescue). Their deployment, however, remains largely limited to visual image inspection by skilled operators, restricting their applicability under time-constrained real conditions. This paper proposes a new solution to autonomously map building damage with a commercial UAV in near real-time. The solution integrates different components that allow live streaming of the images to a laptop and their processing on the fly. Advanced photogrammetric techniques and deep learning algorithms are combined to deliver a true-orthophoto showing the position of building damage, already processed by the time the UAV returns to base. These algorithms have been customized to deliver fast results, fulfilling the near real-time requirements. The complete solution has been tested in different conditions and received positive feedback from the FRs involved in the EU-funded project INACHUS. Two realistic pilot tests are described in the paper. The achieved results show the great potential of the presented approach, how close the proposed solution is to FRs' expectations, and where more work is still needed.
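To make the on-the-fly processing step concrete, the sketch below shows one possible shape of the near-real-time loop described above: frames streamed from the UAV are read on the laptop and passed through a segmentation CNN that flags damaged building areas for later overlay on the orthophoto. The stream URL, model file, input size, and damage class index are assumptions for illustration, not details taken from the paper.

# Hypothetical sketch of the on-the-fly processing loop: frames streamed from the
# UAV are pulled from a video endpoint, passed through a damage-segmentation CNN,
# and flagged regions are collected while the flight is still in progress.
# Stream URL, model file, and class index are assumed, not taken from the paper.
import cv2
import numpy as np
import torch

STREAM_URL = "rtsp://192.168.10.1/live"   # assumed UAV video endpoint
DAMAGE_CLASS = 1                          # assumed label index for damaged areas

def run_live_damage_detection(model_path: str = "damage_seg.pt"):
    model = torch.jit.load(model_path).eval()   # pre-trained segmentation CNN (assumed artifact)
    capture = cv2.VideoCapture(STREAM_URL)
    detections = []                             # per-frame damage masks kept for the orthophoto overlay

    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # Resize and normalise to the network's expected input (placeholder values).
        inp = cv2.resize(frame, (512, 512)).astype(np.float32) / 255.0
        tensor = torch.from_numpy(inp).permute(2, 0, 1).unsqueeze(0)
        with torch.no_grad():
            logits = model(tensor)              # (1, C, H, W) class scores
        mask = logits.argmax(dim=1).squeeze(0).numpy() == DAMAGE_CLASS
        if mask.any():
            detections.append((frame, mask))    # frame + mask kept for georeferencing

    capture.release()
    return detections

In a real deployment the collected frame/mask pairs would be handed to the photogrammetric pipeline for georeferencing and orthophoto generation; the sketch only covers the per-frame inference loop.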
Unmanned Aerial Vehicles (UAVs) for 3D indoor mapping applications are often equipped with bulky and expensive sensors, such as LIDAR (Light Detection and Ranging) or depth cameras. The same task could also be performed by inexpensive RGB cameras mounted on small, lightweight platforms that are more agile in confined spaces, such as during emergencies. However, this task remains challenging because the absence of a GNSS (Global Navigation Satellite System) signal limits the localization (and scaling) of the UAV, while the reduced density of points in feature-based monocular SLAM (Simultaneous Localization and Mapping) limits the completeness of the delivered maps. In this paper, the real-time capabilities of a commercial, inexpensive UAV (DJI Tello) for indoor mapping are investigated. The work aims to assess its suitability for quick mapping in emergency conditions to support First Responders (FRs) during rescue operations in collapsed buildings. The proposed solution uses only images as input and integrates SLAM and CNN-based (Convolutional Neural Network) Single Image Depth Estimation (SIDE) algorithms to densify and scale the data and to deliver a map of the environment suitable for real-time exploration. The implemented algorithms, the training strategy of the network, and the first tests on the main elements of the proposed methodology are reported in detail. The results achieved in real indoor environments are also presented, demonstrating performance compatible with FRs' requirements for exploring indoor volumes before entering a building.
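As a simple illustration of how SLAM output and CNN-predicted depths can be combined to recover metric scale, the sketch below compares the up-to-scale depths of tracked SLAM points with the metric depths predicted by a single-image depth network and takes a robust median ratio as the global scale factor. This is a common strategy and only an approximation of the pipeline summarized above; the function names and synthetic example values are illustrative, not the authors' exact procedure.

# Minimal sketch: recover metric scale of a monocular SLAM map from SIDE depths.
# A median ratio between CNN-predicted (metric) depths and SLAM (up-to-scale)
# depths at the same pixel locations gives a robust global scale estimate.
import numpy as np

def estimate_metric_scale(slam_depths: np.ndarray, cnn_depths: np.ndarray) -> float:
    """slam_depths: depths of tracked SLAM points in a keyframe (arbitrary scale).
    cnn_depths: metric depths predicted by the SIDE network at the same pixels."""
    valid = (slam_depths > 0) & (cnn_depths > 0)
    ratios = cnn_depths[valid] / slam_depths[valid]
    return float(np.median(ratios))            # median is robust to outliers in either source

def scale_point_cloud(points: np.ndarray, scale: float) -> np.ndarray:
    """Apply the recovered scale to the sparse SLAM point cloud (N x 3)."""
    return points * scale

# Synthetic example: SLAM depths are metric depths divided by an unknown factor.
true_scale = 2.5
cnn = np.random.uniform(1.0, 6.0, size=200)                 # "metric" depths from the network
slam = cnn / true_scale + np.random.normal(0, 0.02, size=200)
print(estimate_metric_scale(slam, cnn))                     # close to 2.5

The densification step would then back-project the CNN depth map from each keyframe pose into the (now metrically scaled) map; that part is omitted here for brevity.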