Humanitarian crisis scenarios typically require immediate rescue intervention. In many cases, conditions at the scene prevent human rescuers from providing instant aid because of hazardous, unexpected, and life-threatening situations. Such scenarios are ideal for autonomous mobile robot systems to assist in searching for, and even rescuing, individuals. In this study, we present a synchronous ground-aerial robot collaboration approach in which an Unmanned Aerial Vehicle (UAV) and a humanoid robot solve a search and rescue scenario locally, without the aid of a commonly used Global Navigation Satellite System (GNSS). Specifically, the UAV combines Simultaneous Localization and Mapping (SLAM) and OctoMap approaches to extract a 2.5D occupancy grid map of the unknown area relative to the humanoid robot. The humanoid robot receives a goal position in the created map and executes a path planning algorithm to estimate a footstep navigation trajectory for reaching the goal. As the humanoid robot navigates, it localizes itself in the map using an adaptive Monte Carlo localization algorithm that combines local odometry data with sensor observations from the UAV. Finally, the humanoid robot performs visual human body detection on its camera data using a pre-trained Darknet neural network. The proposed robot collaboration scheme has been tested in a proof-of-concept setting in an outdoor GNSS-denied environment.
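To illustrate the 2.5D occupancy grid idea mentioned above, the following minimal sketch (not the authors' implementation; the function name, resolution, and height threshold are illustrative assumptions) projects a 3D point cloud onto a grid and marks cells with large vertical extent as occupied, roughly in the spirit of an OctoMap-derived 2.5D map:

```python
import numpy as np

def occupancy_grid_25d(points, resolution=0.1, height_threshold=0.3):
    """Project an Nx3 point cloud onto a 2.5D occupancy grid.

    A cell is marked occupied when the vertical spread of its
    points exceeds `height_threshold` (an illustrative criterion;
    real systems also model free space and sensor noise).
    """
    points = np.asarray(points, dtype=float)
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / resolution).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    z_min = np.full(shape, np.inf)
    z_max = np.full(shape, -np.inf)
    for (i, j), z in zip(idx, points[:, 2]):
        z_min[i, j] = min(z_min[i, j], z)
        z_max[i, j] = max(z_max[i, j], z)
    occupied = (z_max - z_min) > height_threshold
    return occupied, origin
```

Empty cells keep a spread of negative infinity and therefore remain free; a footstep planner could then search only cells marked free.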
Global Navigation Satellite Systems (GNSS) are extensively used for location-based services, civil and military applications, precise time reference, atmosphere sensing, and other applications. In surveying and mapping applications, GNSS provides precise three-dimensional positioning all over the globe, day and night, under almost any weather conditions. The visibility of the ground receiver to GNSS satellites is the main driver of GNSS positioning accuracy. When this visibility is obstructed by buildings, high vegetation, or steep slopes, accuracy degrades and alternative techniques must be adopted. In this study, a novel concept of using an unmanned aerial system (UAS) as an intermediate means of improving ground positioning accuracy in GNSS-denied environments is presented. The higher elevation of the UAS provides a clear-sky line of visibility toward the GNSS satellites, so its positioning accuracy is significantly better than that of the ground GNSS receiver. The main endeavor is therefore to transfer the order of accuracy of the GNSS on board the UAS to the ground. The general architecture of the proposed system includes hardware and software components (e.g., camera, gimbal, rangefinder) for automating the procedure. The integration of the coordinate systems for each payload setting is described, and an error budget analysis is carried out to evaluate and identify the system's critical elements along with the potential of the proposed method.
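The core geometric step, transferring the UAS position to a ground point, can be sketched as follows. This is a simplified illustration, not the paper's system: the function name, the local ENU frame, and the angle conventions (yaw from east toward north, pitch negative downward) are assumptions, and a real implementation would also fold in gimbal calibration and lever-arm offsets from the error budget:

```python
import numpy as np

def ground_point_from_uas(uas_enu, yaw_deg, pitch_deg, slant_range):
    """Estimate a ground point in a local ENU frame from the UAS
    position, the gimbal pointing angles, and a rangefinder
    measurement (illustrative conventions; see lead-in)."""
    yaw = np.radians(yaw_deg)
    pitch = np.radians(pitch_deg)
    # Unit pointing vector of the camera/rangefinder axis.
    direction = np.array([
        np.cos(pitch) * np.cos(yaw),
        np.cos(pitch) * np.sin(yaw),
        np.sin(pitch),
    ])
    return np.asarray(uas_enu, dtype=float) + slant_range * direction
```

In this sketch, the accuracy of the transferred ground point is bounded by the UAS GNSS fix plus the angular and range measurement errors, which is exactly what the error budget analysis quantifies.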
Precise coordinate estimation is a fundamental engineering challenge in many mapping applications. Conventional surveying techniques based on global navigation satellite systems (GNSS), light detection and ranging (LiDAR), or total stations involve costly equipment and time-consuming methodologies and may suffer from restrictions such as occlusions and low satellite availability. In this study, a surveying approach based on a custom-equipped unmanned aerial vehicle (UAV) and ArUco markers distributed across an unknown area as the potential target mapping points is proposed and evaluated through simulation. The UAV incorporates a real-time kinematic GNSS receiver and a gimbal unit with a simple camera and an electronic rangefinder module. The system demonstrates a real-time hierarchical targeting scheme that allows the UAV to engage ground targets through the camera, measure the corresponding distances, record the UAV coordinates, and then perform a multilateration-based target coordinate estimation. To evaluate the flexibility, efficiency, and onboard performance of the proposed target positioning approach, the method was developed as a Robot Operating System (ROS) software package and tested in the Gazebo robotics simulator on an NVIDIA Jetson TX2. Several mapping environments, along with varying flight scenarios, were created to evaluate the resulting coordinate estimation errors. The achieved positioning accuracy is very promising, particularly for circular flight trajectories. Along these lines, the proposed methodology may pave the way toward a precise surveying alternative, as it could become a mainstream method for common mapping applications and even provide coordinate estimates in demanding and, until now, unattainable areas.
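The multilateration step can be illustrated with a minimal sketch (not the paper's implementation; the function name and the particular linearization are assumptions). Each UAV position and rangefinder distance defines a sphere around the target; subtracting one range equation from the others yields a linear system that least squares can solve:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 3D point from Nx3 measurement positions
    (`anchors`) and the N measured distances (`ranges`).

    Linearizes the range equations against the first anchor:
    ||x - p_i||^2 - ||x - p_0||^2 = r_i^2 - r_0^2 is linear in x.
    Needs at least 4 non-coplanar anchors for a unique 3D solution.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noisy ranges, the least-squares solution averages out measurement error, which is one reason spatially diverse (e.g., circular) flight trajectories around the target tend to condition the system well.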
Keypoint detection serves as the basis for many computer vision and robotics applications. Despite the fact that colored point clouds can be readily obtained, most existing keypoint detectors extract only geometry-salient keypoints, which can impede the overall performance of systems that intend to (or have the potential to) leverage color information. To promote advances in such systems, we propose an efficient multi-modal keypoint detector that can extract both geometry-salient and color-salient keypoints in colored point clouds. The proposed CEntroid Distance (CED) keypoint detector comprises an intuitive and effective saliency measure, the centroid distance, that can be used in both 3D space and color space, and a multi-modal non-maximum suppression algorithm that can select keypoints with high saliency in two or more modalities. The proposed saliency measure leverages directly the distribution of points in a local neighborhood and does not require normal estimation or eigenvalue decomposition. We evaluate the proposed method in terms of repeatability and computational efficiency (i.e., running time) against state-of-the-art keypoint detectors on both synthetic and real-world datasets. Results demonstrate that our proposed CED keypoint detector requires minimal computational time while attaining high repeatability. To showcase one of the potential applications of the proposed method, we further investigate the task of colored point cloud registration. Results suggest that our proposed CED detector outperforms state-of-the-art handcrafted and learning-based keypoint detectors in the evaluated scenes. The C++ implementation of the proposed method is made publicly available at https://github.com/UCR-Robotics/CED_Detector.
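The centroid-distance idea can be sketched in a few lines. This is an illustrative reading of the saliency measure described above, not the authors' C++ implementation (which is linked in the abstract): score each point by how far it lies from the centroid of its local neighborhood, so points on corners and edges, whose neighborhoods are lopsided, score high without any normal estimation or eigenvalue decomposition. A naive O(N^2) neighbor search is used here for clarity:

```python
import numpy as np

def centroid_distance_saliency(points, radius=1.5):
    """Per-point centroid-distance saliency: the distance from each
    point to the centroid of its neighbors within `radius`.
    (Illustrative sketch; a practical detector would use a k-d tree
    and could apply the same measure in color space.)"""
    points = np.asarray(points, dtype=float)
    saliency = np.zeros(len(points))
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        neighbors = points[dists <= radius]
        saliency[i] = np.linalg.norm(p - neighbors.mean(axis=0))
    return saliency
```

Running the same measure on RGB values instead of coordinates would score color saliency, and a multi-modal non-maximum suppression could then keep points that rank highly in either modality.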