Planetary exploration poses many challenges for a robot system, from weight and size constraints to sensors and actuators that must be suitable for extraterrestrial environment conditions. Because communication with other planets suffers a significant delay, efficient operation of a robot system requires a high level of autonomy. In this work, we present the Lightweight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and the use of stereo cameras as its main sensors ensures applicability to space missions. We implemented software components for self-localization in GPS-denied environments, environment mapping, object search and localization, and for the autonomous pickup and assembly of objects with its arm. Additional high-level mission control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of the LRU at the SpaceBotCamp challenge, a national robotics contest with a focus on autonomous planetary exploration. A robot had to autonomously explore a Moon-like rough-terrain environment, locate and collect two objects, and assemble them after transport to a third object, which the LRU did on its first try, in half of the time, and fully autonomously.
Planetary exploration poses many challenges for a robot system, from weight and size constraints to extraterrestrial environment conditions that restrict the suitable sensors and actuators. As the distance to other planets introduces a significant communication delay, the efficient operation of a robot system requires a high level of autonomy. In this work, we present our Lightweight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and stereo cameras as its main sensors ensure the applicability to space missions. We implemented software components for self-localization in GPS-denied environments, autonomous exploration and mapping, as well as computer vision, planning and control modules for the autonomous localization, pickup and assembly of objects with its manipulator. Additional high-level mission control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of the LRU at the SpaceBotCamp challenge, a national robotics contest with a focus on autonomous planetary exploration. A robot had to autonomously explore an unknown Moon-like rough terrain, locate and collect two objects, and assemble them after transport to a third object, which the LRU did on its first try, in half of the time, and fully autonomously. The next milestone for our ongoing LRU development is an upcoming planetary exploration analogue mission to perform scientific experiments at a Moon-analogue site located on a volcano.
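To make the high-level mission flow described in these abstracts more concrete, the following is a minimal, hypothetical sketch of a mission executive for a SpaceBotCamp-style task (explore, collect two objects, transport, assemble). The state names, the `world` object, and its boolean attributes are illustrative assumptions and do not reflect the LRU's actual mission control implementation.

from enum import Enum, auto

class MissionState(Enum):
    EXPLORE = auto()      # map unknown terrain and search for objects
    PICK_UP = auto()      # approach a located object and grasp it
    TRANSPORT = auto()    # carry the collected object to the assembly site
    ASSEMBLE = auto()     # assemble the objects at the third object
    DONE = auto()

def mission_step(state, world):
    """Advance the hypothetical mission executive by one decision step.

    `world` is assumed to expose booleans summarizing what the rover has
    achieved so far; all attribute names are illustrative placeholders.
    """
    if state is MissionState.EXPLORE:
        return MissionState.PICK_UP if world.object_located else MissionState.EXPLORE
    if state is MissionState.PICK_UP:
        return MissionState.TRANSPORT if world.object_grasped else MissionState.PICK_UP
    if state is MissionState.TRANSPORT:
        return MissionState.ASSEMBLE if world.at_assembly_site else MissionState.TRANSPORT
    if state is MissionState.ASSEMBLE:
        if world.all_objects_assembled:
            return MissionState.DONE
        # More objects remain to be collected: resume exploration.
        return MissionState.EXPLORE
    return MissionState.DONE

In such a design, the executive only decides which behavior runs next, while perception, navigation, and manipulation components report their outcomes through the world state; this separation is what allows the same loop to run autonomously and still be monitored over a delayed communication link.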
Objective: Currently, there are some 95,000 people in Europe suffering from upper-limb impairment. Rehabilitation should begin right after the impairment occurs and should be performed regularly thereafter. Moreover, the rehabilitation process should be tailored specifically to both patient and impairment. Approach: To address this, we have developed a low-cost solution that integrates an off-the-shelf Virtual Reality (VR) setup with our in-house developed arm/hand intent detection system. The resulting system, called VITA, enables an upper-limb disabled person to interact in a virtual world as if her impaired limb were still functional. VITA provides two specific features that we deem essential: proportionality of force control and interactivity between the user and the intent detection core. The use of relatively cheap commercial components enables VITA to be used in rehabilitation centers, hospitals, or even at home. The applications of VITA range from rehabilitation of patients with musculodegenerative conditions (e.g., ALS) to treating phantom-limb pain in people with limb loss and to prosthetic training. Main Results: We present a multifunctional system for upper-limb rehabilitation in VR. We tested the system using a VR implementation of a standard hand assessment tool, the Box and Block test, and performed a user study on this standard test with both intact subjects and a prosthetic user. Furthermore, we present additional applications, showing the versatility of the system. Significance: The VITA system demonstrates the applicability of combining our experience in intent detection with a state-of-the-art VR system for rehabilitation purposes. With VITA, we have an all-purpose experimental tool available, which allows us to quickly and realistically simulate all kinds of real-world problems and rehabilitation exercises for upper-limb impaired patients. Additionally, other scenarios such as prosthesis simulations and control modes can be quickly implemented and tested.
One of the crucial problems found in the scientific community of assistive/rehabilitation robotics nowadays is that of automatically detecting what a disabled subject (for instance, a hand amputee) wants to do, exactly when she wants to do it, and strictly for the time she wants to do it. This problem, commonly called “intent detection,” has traditionally been tackled using surface electromyography, a technique which suffers from a number of drawbacks, including the changes in the signal induced by sweat and muscle fatigue. With the advent of realistic, physically plausible augmented- and virtual-reality environments for rehabilitation, this approach does not suffice anymore. In this paper, we explore a novel method to solve the problem, which we call Optical Myography (OMG). The idea is to visually inspect the human forearm (or stump) to reconstruct which fingers are moving and to what extent. In a psychophysical experiment involving ten intact subjects, we used visual fiducial markers (AprilTags) and a standard web camera to visualize the deformations of the surface of the forearm, which were then mapped to the intended finger motions. As ground truth, a visual stimulus was used, avoiding the need for finger sensors (force/position sensors, datagloves, etc.). Two machine-learning approaches, a linear and a non-linear one, were comparatively tested in settings of increasing realism. The results indicate an average error in the range of 0.05–0.22 (root mean square error normalized over the signal range), in line with similar results obtained with more mature techniques such as electromyography. If further successfully tested in the large, this approach could lead to vision-based intent detection for amputees, with the main application of letting such disabled persons dexterously and reliably interact in an augmented-/virtual-reality setup.
Given the recent progress in the development of computer vision, it is nowadays possible to optically track features of the human body with unprecedented precision. We take this as a starting point to build a novel human-machine interface for the disabled. In this particular work, we explore the possibility of visually inspecting the human forearm to detect which fingers are moving, and to what extent. In a psychophysical experiment with ten intact subjects, we tracked the deformations of the surface of the forearm to reconstruct the intended finger motions. Ridge Regression was used for the reconstruction. The results are highly promising, leading to an average error in the range of 0.13 to 0.2 (normalized root mean square error). If further successfully tested in the large, this approach could represent a fully fledged alternative or replacement for similar traditional interfaces such as surface electromyography.
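As a rough illustration of the reconstruction pipeline described in the two optical myography abstracts above, the sketch below fits a linear (Ridge) and a non-linear (kernel Ridge) regressor from marker-displacement features to per-finger activations and evaluates both with a range-normalized RMSE. The data is synthetic and the feature/label layout, marker count, and split are assumptions for illustration, not the authors' actual experimental setup.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are camera frames; features are 2-D displacements
# of forearm-mounted fiducial markers; labels are per-finger activation levels.
n_frames, n_markers, n_fingers = 1000, 10, 5
X = rng.normal(size=(n_frames, 2 * n_markers))                        # marker displacements
W = rng.normal(size=(2 * n_markers, n_fingers))
y = np.tanh(X @ W) + 0.05 * rng.normal(size=(n_frames, n_fingers))   # finger activations

# Simple chronological train/test split, mimicking an online setting.
split = int(0.7 * n_frames)
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized over the signal range, per finger."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    signal_range = y_true.max(axis=0) - y_true.min(axis=0)
    return rmse / signal_range

linear = Ridge(alpha=1.0).fit(X_tr, y_tr)                             # linear approach
nonlinear = KernelRidge(kernel="rbf", alpha=1.0).fit(X_tr, y_tr)      # non-linear approach

print("linear NRMSE per finger:    ", nrmse(y_te, linear.predict(X_te)))
print("non-linear NRMSE per finger:", nrmse(y_te, nonlinear.predict(X_te)))

The reported errors of roughly 0.05-0.22 correspond to this kind of range-normalized RMSE computed on held-out data; in the real experiments the features come from tracked AprilTag deformations and the ground-truth activations from a visual stimulus rather than from synthetic signals.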