Autonomous drones will play an essential role in human-machine teaming in future search and rescue (SAR) missions. We present a prototype that finds people fully autonomously in densely occluded forests. In the course of 17 field experiments conducted over various forest types and under different flying conditions, our drone found, in total, 38 of 42 hidden persons. For experiments with predefined flight paths, the average precision was 86%, and 30 of 34 hidden persons were found. For adaptive sampling experiments (where potential findings are double-checked on the basis of initial classification confidences), all eight hidden persons were found, yielding an average precision of 100%, while classification confidence increased on average by 15%. Thermal image processing, classification, and dynamic flight path adaptation are computed on board in real time while flying. We show that deep learning–based person classification is unaffected by sparse and error-prone sampling within straight flight path segments. This finding allows search missions to be substantially shortened and reduces the image complexity to one-tenth of that of previous approaches. The goal of our adaptive online sampling technique is to find people as reliably and quickly as possible, which is essential in time-critical applications such as SAR. Our drone enables SAR operations in remote areas without stable network coverage, because it transmits to the rescue team only classification results that indicate detections and can thus operate with intermittent minimal-bandwidth connections (e.g., by satellite). Once received, these results can be visually enhanced for interpretation on remote mobile devices.
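The adaptive sampling strategy described above can be illustrated with a minimal sketch. The function, field names, and confidence thresholds below are hypothetical assumptions for illustration only, not the published system's actual parameters or code:

```python
# Hypothetical sketch of an adaptive-sampling decision rule: detections
# with inconclusive confidence are scheduled for a double-check flight,
# high-confidence detections are reported immediately, and very low
# confidences are treated as background. All thresholds are illustrative.

def plan_revisits(detections, report_threshold=0.8, ignore_threshold=0.3):
    """Return the detections whose initial classification confidence is
    promising but not yet conclusive, so the drone can re-sample them."""
    revisit = []
    for det in detections:
        if ignore_threshold <= det["confidence"] < report_threshold:
            revisit.append(det)  # fly closer / re-sample this location
    return revisit

candidates = [
    {"id": 1, "confidence": 0.95},  # report directly
    {"id": 2, "confidence": 0.55},  # double-check
    {"id": 3, "confidence": 0.10},  # ignore as background
]
print([d["id"] for d in plan_revisits(candidates)])  # → [2]
```

Re-sampling only borderline detections is what keeps mission time short while still raising confidence where it matters.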
Synthetic apertures find applications in many fields, such as radar, radio telescopes, microscopy, sonar, ultrasound, LiDAR, and optical imaging. They approximate the signal of a single hypothetical wide-aperture sensor with either an array of static small-aperture sensors or a single moving small-aperture sensor. The common assumption in synthetic aperture sampling is that a dense sampling pattern within a wide aperture is required to reconstruct a clear signal. In this article, we show that there exist practical limits to both synthetic aperture size and number of samples for the application of occlusion removal. This leads to an understanding of how to design synthetic aperture sampling patterns and sensors in an optimal and practically efficient way. We apply our findings to airborne optical sectioning, which uses camera drones and synthetic aperture imaging to computationally remove occluding vegetation, such as trees, for inspecting ground surfaces.
Drones are becoming increasingly popular for remote sensing of landscapes in archeology, cultural heritage, forestry, and other disciplines. They are more efficient than airplanes for capturing small areas of up to several hundred square meters. LiDAR (light detection and ranging) and photogrammetry have been applied together with drones to achieve 3D reconstruction. With airborne optical sectioning (AOS), we present a radically different approach that is based on an old idea: synthetic aperture imaging. Rather than measuring, computing, and rendering 3D point clouds or triangulated 3D meshes, we apply image-based rendering for 3D visualization. In contrast to photogrammetry, AOS does not suffer from inaccurate correspondence matches and long processing times. It is cheaper than LiDAR, delivers surface color information, and has the potential to achieve high sampling resolutions. AOS samples the optical signal of wide synthetic apertures (30-100 m diameter) with unstructured video images recorded from a low-cost camera drone to support optical sectioning by image integration. The wide-aperture signal results in a shallow depth of field and consequently in a strong blur of out-of-focus occluders, while images of points in focus remain clearly visible. Shifting focus computationally towards the ground allows optical slicing through dense occluder structures (such as leaves, tree branches, and coniferous trees), and discovery and inspection of concealed artifacts on the surface.
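The core of optical sectioning by image integration can be sketched in a few lines: each view is shifted so that points on the chosen focal plane align across views, and the shifted images are averaged, which blurs out occluders at other depths. This is a simplified, hypothetical sketch assuming planar focal surfaces, integer pixel shifts, and known camera offsets; the actual AOS pipeline registers real drone imagery with full camera poses:

```python
import numpy as np

def integrate_focal_stack(images, offsets_m, focus_depth_m, px_per_m):
    """Synthetic-aperture focusing sketch: average the views after
    shifting each one according to its camera offset, so that points at
    focus_depth_m align while out-of-focus occluders blur out.
    `px_per_m` is an assumed image-scale constant (pixels per meter of
    disparity at unit depth); all parameters are illustrative."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (dx_m, dy_m) in zip(images, offsets_m):
        # Disparity grows with camera baseline and shrinks with depth.
        sx = int(round(dx_m * px_per_m / focus_depth_m))
        sy = int(round(dy_m * px_per_m / focus_depth_m))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(images)
```

With zero offsets this reduces to a plain average; with real baselines, varying `focus_depth_m` computationally slices through the occluding volume.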
In this article, we describe and validate the first fully automatic parameter optimization for thermal synthetic aperture visualization. It replaces previous manual exploration of the parameter space, which is time-consuming and error-prone. We prove that the visibility of targets in thermal integral images is proportional to the variance of the targets' image. Since this is invariant to occlusion, it represents a suitable objective function for optimization. Our findings have the potential to enable fully autonomous search and rescue operations with camera drones.
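The variance-based objective described above can be sketched as follows. This is a hypothetical illustration of the principle only (function names and the grid-search framing are assumptions, not the paper's implementation): because image variance rises as the target comes into focus and is invariant to occlusion, it can score candidate integral images during parameter search:

```python
import numpy as np

def variance_objective(integral_image):
    """Occlusion-invariant visibility score: the variance of the
    integral image. Sharper, better-focused targets yield higher
    variance than washed-out, defocused ones."""
    return float(np.var(integral_image))

def best_focus_depth(integrals_by_depth):
    """Illustrative grid search: pick the focal depth whose integral
    image maximizes the variance objective."""
    return max(integrals_by_depth,
               key=lambda d: variance_objective(integrals_by_depth[d]))

# A crisp target (high contrast) beats a defocused, uniform one.
sharp = np.array([[0.0, 1.0], [1.0, 0.0]])
blurred = np.full((2, 2), 0.5)
print(best_focus_depth({30.0: sharp, 35.0: blurred}))  # → 30.0
```

In an autonomous pipeline, this objective lets the drone tune visualization parameters on board, with no human in the loop.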