Since 2004, NASA has been working to return to the Moon. In contrast to the Apollo missions, two key objectives of the current exploration program are to establish significant infrastructure and an outpost. Achieving these objectives will enable long-duration stays and long-distance exploration of the Moon. To do this, robotic systems will be needed to perform tasks which cannot, or should not, be performed by crew alone. In this paper, we summarize our work to develop "utility robots" for lunar surface operations, present results and lessons learned from field testing, and discuss directions for future research.
To command a rover to go to a location of scientific interest on a remote planet, the rover must be capable of reliably tracking the target designated by a scientist from about ten rover lengths away. The rover must maintain lock on the target while traversing rough terrain and avoiding obstacles, without the need for communication with Earth. Among the challenges of tracking targets from a rover are the large changes in the appearance and shape of the selected target as the rover approaches it, the limited frame rate at which images can be acquired and processed, and the sudden changes in camera pointing as the rover goes over rocky terrain. We have investigated various techniques for combining 2D and 3D information in order to increase the reliability of visually tracking targets under Mars-like conditions. We present the approaches that we have evaluated on simulated data and tested onboard the Rocky 8 rover in the JPL Mars Yard and the K9 rover in the ARC Marscape, including results for 2D trackers, ICP, visual odometry, and combined 2D/3D trackers.
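The ICP technique mentioned above repeatedly alternates between finding point correspondences and solving for the rigid motion that best aligns them. The sketch below shows only the core alignment step (the Kabsch algorithm) under the assumption that correspondences are already known; it is an illustrative implementation, not the code used on the rovers.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm); this is the alignment step inside each ICP iteration."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known 10-degree rotation about z plus a translation.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
src = np.random.default_rng(0).uniform(-1.0, 1.0, (30, 3))
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
```

With noiseless correspondences the recovered `R` and `t` match the true motion exactly; in a full ICP loop this step would be interleaved with nearest-neighbor correspondence search over the stereo-derived point clouds.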
Future planetary rover missions, such as the upcoming Mars Science Laboratory, will require rovers to autonomously navigate to science targets specified from up to 10 meters away, and to place instruments against these targets with up to 1 centimeter precision. The current state of the art, demonstrated by the Mars Exploration Rover (MER) mission, typically requires three sols (Martian days) for approach and placement, with several communication cycles between the rovers and ground operations. The capability for goal-level commanding of a rover to visit multiple science targets in a single sol represents a tenfold increase in productivity, and decreases daily operations costs. Such a capability requires a high degree of robotic autonomy: visual target tracking and navigation for the rover to approach the targets, mission planning for determining the most beneficial course of action given a large set of desired goals in the face of uncertainty, and robust execution for coping with variations in time and power consumption, as well as the possibility of failures in tracking or navigation due to occlusion or unexpected obstacles. We have developed a system that provides these features. The system uses a vision-based target tracker that recovers the 6-DOF transformations between the rover and the tracked targets as the rover moves, and an off-board planner that creates plans that are carried out by an on-board robust executive. The tracker combines a feature-based approach that tracks a set of interest points in 3-D using stereo with a shape-based approach that registers dense 3-D meshes. The off-board planner, in addition to generating a primary activity sequence, creates a large set of contingent, or alternate, plans to deal with anticipated failures in tracking and the uncertainty in resource consumption. This paper describes our tracking and planning systems, including the results of experiments carried out using the K9 rover.
These systems are part of a larger effort, which includes tools for target specification in 3-D, ground-based simulation and plan verification, round-trip data tracking, rover software and hardware, and scientific visualization. The complete system has been shown to provide the capability of multiple instrument placements on rocks within a 10 meter radius, all within a single command cycle.
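A tracker of the kind described above must predict where a target should appear after the rover moves, by composing the target's last known 6-DOF pose with the rover's frame-to-frame motion estimate. The following is a minimal sketch of that composition using 4x4 homogeneous transforms; the function names and frame conventions are illustrative assumptions, not the paper's API.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def predict_target(T_target_in_rover, T_motion):
    """Predict a tracked target's pose in the new rover frame after the rover
    moves by T_motion (e.g. estimated by visual odometry): compose the inverse
    of the motion with the target's pose in the old rover frame."""
    return np.linalg.inv(T_motion) @ T_target_in_rover

# Example: a target 5 m ahead of the rover; the rover advances 1 m along x.
T_target = make_pose(np.eye(3), np.array([5.0, 0.0, 0.0]))
T_motion = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
T_new = predict_target(T_target, T_motion)   # target now 4 m ahead
```

In practice this prediction would seed the 2-D feature search and the 3-D mesh registration in the next frame, keeping the tracker locked on despite large viewpoint changes.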
This paper introduces an advanced rover localization system suitable for autonomous planetary exploration in the absence of Global Positioning System (GPS) infrastructure. Given an existing terrain map (image and elevation) obtained from satellite imagery and the images provided by the rover stereo camera system, the proposed method determines the best rover location through visual odometry, 3D terrain matching, and horizon matching. The system is tested on data retrieved from a 3 km traverse of the Basalt Hills quarry in California, where the GPS track is used as ground truth. Experimental results show that the system presented here reduces the localization error of wheel odometry by over 60%.
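The error-reduction figure above is computed by comparing each estimated track against the GPS ground truth. The toy example below illustrates the metric with fabricated drift profiles (the numbers are illustrative assumptions, not the paper's data): wheel odometry error grows with distance, while a map-matched estimate stays bounded.

```python
import numpy as np

def mean_position_error(estimate, ground_truth):
    """Mean Euclidean distance between an estimated track and ground truth
    (both N x 2 arrays of planar positions)."""
    return np.mean(np.linalg.norm(estimate - ground_truth, axis=1))

# Toy 100 m straight-line traverse sampled every meter.
truth = np.column_stack([np.linspace(0.0, 100.0, 101), np.zeros(101)])
# Wheel odometry: lateral drift growing linearly to 10 m.
wheel = truth + np.column_stack([np.zeros(101), np.linspace(0.0, 10.0, 101)])
# Fused estimate: error bounded at 1 m by terrain/horizon matching.
fused = truth + np.column_stack([np.zeros(101), np.full(101, 1.0)])

err_wheel = mean_position_error(wheel, truth)    # 5.0 m
err_fused = mean_position_error(fused, truth)    # 1.0 m
reduction = 100.0 * (1.0 - err_fused / err_wheel)
```

Here the fused estimate cuts the mean error by 80%; the paper reports over 60% on the real Basalt Hills traverse against GPS truth.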