Teams of mobile robots will play a crucial role in future missions to explore the surfaces of extraterrestrial bodies. Setting up infrastructure and taking scientific samples are expensive tasks when operating in distant, challenging, and unknown environments. In contrast to current single-robot space missions, future heterogeneous robotic teams will increase efficiency through enhanced autonomy and parallelization, improve robustness through functional redundancy, and benefit from the complementary capabilities of the individual robots. In this article, we present our heterogeneous robotic team, consisting of flying and driving robots that we plan to deploy on scientific sampling demonstration missions at a Moon-analogue site on Mt. Etna, Sicily, Italy in 2021 as part of the ARCHES project. We describe the robots' individual capabilities and their roles in two mission scenarios. We then present components and experiments addressing key tasks within these scenarios: automated task planning, high-level mission control, spectral rock analysis, radio-based localization, collaborative multi-robot 6D SLAM in Moon-analogue and Mars-like scenarios, and demonstrations of autonomous sample return.
We present a multicopter system equipped with two pairs of wide-angle stereo cameras and an inertial measurement unit (IMU) for robust visual-inertial navigation and time-efficient, omni-directional 3D mapping, enabling it to map a large area of interest in a short amount of time. The four cameras cover a 240 degree vertical stereo field of view (FOV), which also makes the system suitable for cramped and confined environments such as caves. In our approach, we synthesize eight virtual pinhole cameras from the four wide-angle cameras. Each of the resulting four synthesized pinhole stereo systems provides input to an independent visual odometry (VO). Subsequently, the four individual motion estimates are fused with IMU data based on their consistency with the state estimate. We describe the configuration and image processing of the vision system as well as the sensor fusion and mapping pipeline on board the MAV. We demonstrate the robustness of our multi-VO approach for visual-inertial navigation and present results of a 3D-mapping experiment.
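To make the consistency-based fusion concrete, the following is a minimal sketch of how several independent VO increments could be gated against an IMU-propagated prediction and then combined. The function name, the Mahalanobis-distance gate, and the inverse-covariance weighting are illustrative assumptions, not the actual on-board implementation.

```python
# Hedged sketch: fuse several independent visual-odometry (VO) increments with an IMU
# prediction by keeping only the increments that are consistent with it. Simplified to
# 3D translation increments; thresholds and weighting are assumptions for illustration.
import numpy as np

def fuse_vo_increments(vo_deltas, vo_covs, imu_delta, imu_cov, gate=11.34):
    """vo_deltas: list of (3,) translation increments from the individual VOs.
    vo_covs:   list of (3, 3) covariances of those increments.
    imu_delta: (3,) increment predicted by IMU integration, imu_cov its covariance.
    gate:      squared Mahalanobis distance threshold (chi^2, 3 dof, ~99%)."""
    info_sum = np.zeros((3, 3))
    weighted_sum = np.zeros(3)
    for delta, cov in zip(vo_deltas, vo_covs):
        innovation = delta - imu_delta
        S = cov + imu_cov                                  # innovation covariance
        if innovation @ np.linalg.solve(S, innovation) > gate:
            continue                                       # inconsistent with IMU: reject
        info = np.linalg.inv(cov)
        info_sum += info
        weighted_sum += info @ delta
    if not info_sum.any():                                 # every VO rejected: fall back to IMU
        return imu_delta
    return np.linalg.solve(info_sum, weighted_sum)         # inverse-covariance weighted mean

# Example: three consistent VO estimates and one outlier (e.g., a VO that lost tracking).
vo = [np.array([0.10, 0.00, 0.01]), np.array([0.11, 0.01, 0.00]),
      np.array([0.09, -0.01, 0.01]), np.array([0.50, 0.20, 0.00])]
covs = [np.eye(3) * 1e-4] * 4
fused = fuse_vo_increments(vo, covs, np.array([0.10, 0.0, 0.0]), np.eye(3) * 1e-4)
```

A VO whose increment disagrees with the IMU prediction beyond the gate is simply excluded from that fusion step, mirroring the idea of selecting the individual estimates by their consistency with the state estimate.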
We introduce a prototype flying platform for planetary exploration: the autonomous robot design for extraterrestrial applications (ARDEA). Communication with unmanned missions beyond Earth orbit suffers from time delay, so a key criterion for robotic exploration is a robot's ability to perform tasks without human intervention. For autonomous operation, all computations should be done on board, and Global Navigation Satellite System (GNSS) signals should not be relied on for navigation. Given these objectives, ARDEA is equipped with two pairs of wide-angle stereo cameras and an inertial measurement unit (IMU) for robust visual-inertial navigation and time-efficient, omni-directional 3D mapping. The four cameras cover a 240° vertical field of view, enabling the system to operate in confined environments such as caves formed by lava tubes. The captured images are split into several virtual pinhole views, which feed simultaneously running visual odometries. The stereo output is used for simultaneous localization and mapping, 3D map generation, and collision-free motion planning. To operate the vehicle efficiently across a variety of missions, ARDEA's capabilities have been modularized into skills that can be assembled to fulfill a mission's objectives. These skills are defined generically so that they are independent of the robot configuration, making the approach suitable for different heterogeneous robotic teams. The diverse skill set also makes the micro aerial vehicle (MAV) useful for any task where autonomous exploration is needed, for example, terrestrial search-and-rescue missions in which visual navigation through GNSS-denied indoor environments, such as partially collapsed buildings or tunnels, is crucial. We have demonstrated the robustness of our system in indoor and outdoor field tests.
Keywords: aerial robotics, computer vision, exploration, GPS-denied operation, planetary robotics
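As a sketch of the skill concept, the snippet below composes a mission from robot-agnostic skills that operate only on a shared context. The skill names, the context dictionary, and the sequential executor are illustrative assumptions, not ARDEA's actual skill interfaces or mission definitions.

```python
# Hedged sketch: generic skills assembled into a mission plan; each skill only sees a
# shared context, so the same plan could run on different robots of a heterogeneous team.
from typing import Callable, Dict, List

Skill = Callable[[Dict], bool]           # a skill returns True on success

def take_off(ctx: Dict) -> bool:
    ctx["altitude"] = ctx.get("target_altitude", 2.0)
    return True

def explore_area(ctx: Dict) -> bool:
    ctx["map_coverage"] = 1.0            # placeholder for SLAM + exploration planning
    return True

def return_and_land(ctx: Dict) -> bool:
    ctx["altitude"] = 0.0
    return True

SKILLS: Dict[str, Skill] = {
    "take_off": take_off,
    "explore_area": explore_area,
    "return_and_land": return_and_land,
}

def run_mission(plan: List[str], ctx: Dict) -> bool:
    """Execute an ordered list of skill names; abort on the first failed skill."""
    return all(SKILLS[name](ctx) for name in plan)

if __name__ == "__main__":
    context = {"target_altitude": 3.0}
    ok = run_mission(["take_off", "explore_area", "return_and_land"], context)
    print("mission succeeded:", ok, context)
```

Because the mission plan only references skill names and a shared context, swapping the underlying robot or adding new skills does not require changing the plan itself, which is the point of defining the skills generically.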
Key words: confocal laser scanning microscopy (CLSM), diatom, finite element analysis, photogrammetry, lightweight value, scanning electron microscopy (SEM), three-dimensional reconstruction.
Summary: Exact geometric description, numerical analysis, and comparison of microscopic objects such as the frustules of diatoms are of increasing importance in basic research (e.g. functional morphology, taxonomy, and biogeochemistry). Similarly, applied research and product development in the fields of lightweight construction and nanotechnology can benefit from machine-readable data of such structures. This paper presents a new method to combine data from scanning electron microscopy and confocal laser scanning microscopy to generate exact three-dimensional models of diatom frustules. We propose a method to obtain a high-quality mesh for subsequent analysis through finite element analysis, for example, for biomechanical research on diatom frustules. A specific lightweight value as a universal tool to describe and compare the biomechanical quality of microscopic objects is introduced. Our approach improves the precision of three-dimensional reconstructions, but the generation of usable finite element meshes from complex three-dimensional data based on microscopic techniques requires either a transformation of grid points into elements or smoothing algorithms. Biomechanical analyses of differently obtained models indicate that more complex three-dimensional reconstructions lead to more realistic results.
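One of the options mentioned above for obtaining usable finite element meshes from microscopy-derived geometry is smoothing. The snippet below is a minimal sketch of classical uniform Laplacian smoothing of a triangle mesh; the uniform weights, iteration count, and damping factor are assumptions for illustration, not the processing pipeline used in the paper.

```python
# Hedged sketch: uniform Laplacian smoothing of a triangle mesh, a standard way to
# regularize noisy surface reconstructions before finite element meshing.
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """vertices: (V, 3) float array of positions; faces: (F, 3) int array of vertex indices.
    Each iteration moves every vertex by a fraction `lam` toward the centroid of its neighbors."""
    neighbors = [set() for _ in range(len(vertices))]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    verts = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        new = verts.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                centroid = verts[list(nbrs)].mean(axis=0)
                new[i] = verts[i] + lam * (centroid - verts[i])
        verts = new
    return verts
```

Stronger smoothing improves element quality but erodes fine structures such as pores, so the amount of smoothing trades off mesh usability against geometric fidelity of the reconstruction.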
In this paper, we present a method for learning and online generalization of maneuvers for quadrotor-type vehicles. The maneuvers are formulated as optimal control problems, which are solved using a general purpose optimal control solver. The solutions are then encoded and generalized with Dynamic Movement Primitives (DMPs). This allows for real-time generalization to new goals and in-flight modifications. An effective method for joining the generalized trajectories is implemented. We present the necessary theoretical background and error analysis of the generalization. The effectiveness of the proposed method is showcased using planar point-to-point and perching maneuvers in simulation and experiment.
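For context, a discrete Dynamic Movement Primitive encodes a demonstrated trajectory as a stable second-order system plus a learned forcing term, which can then be replayed toward a new goal. The one-dimensional sketch below follows the standard DMP formulation; the gains, basis functions, and the demonstration itself are illustrative assumptions and do not reproduce the maneuvers or solver output from the paper.

```python
# Hedged sketch of a 1-D discrete DMP: fit a forcing term to a demonstration, then roll it
# out toward a new goal. Parameters and the demo trajectory are placeholders for illustration.
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.n, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # basis centers in phase
        self.widths = n_basis ** 1.5 / self.centers
        self.w = np.zeros(n_basis)

    def _psi(self, x):
        x = np.atleast_1d(x)[:, None]
        return np.exp(-self.widths * (x - self.centers) ** 2)          # Gaussian activations

    def fit(self, y_demo, dt):
        T = len(y_demo)
        self.y0, self.g, self.tau = y_demo[0], y_demo[-1], (T - 1) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / self.tau)       # canonical phase
        # Forcing term required to reproduce the demonstration with the transformation system
        # tau^2 * ydd = alpha * (beta * (g - y) - tau * yd) + f(x).
        f_target = (self.tau ** 2 * ydd
                    - self.alpha * (self.beta * (self.g - y_demo) - self.tau * yd))
        scale, psi = x * (self.g - self.y0), self._psi(x)
        for i in range(self.n):                                        # locally weighted regression
            num = np.sum(scale * psi[:, i] * f_target)
            den = np.sum(scale ** 2 * psi[:, i]) + 1e-10
            self.w[i] = num / den

    def rollout(self, goal, dt, tau=None):
        tau = tau or self.tau
        y, yd, x, traj = float(self.y0), 0.0, 1.0, []
        while x > 1e-3:
            psi = self._psi(x)[0]
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (goal - self.y0)
            ydd = (self.alpha * (self.beta * (goal - y) - tau * yd) + f) / tau ** 2
            yd, y, x = yd + ydd * dt, y + yd * dt, x - self.alpha_x * x / tau * dt
            traj.append(y)
        return np.array(traj)

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0.0, 1.0, dt)
    demo = np.sin(0.5 * np.pi * t) ** 2          # placeholder demonstration from 0 to ~1
    dmp = DMP1D()
    dmp.fit(demo, dt)
    new_traj = dmp.rollout(goal=2.0, dt=dt)      # same motion shape, generalized to a new goal
```

In this formulation the goal enters only through the spring term and the forcing-term scaling, which is what enables real-time generalization to new goals without re-solving the underlying optimal control problem.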