This work integrates visual and physical constraints to perform real-time, depth-only tracking of articulated objects, with a focus on tracking a robot's manipulators and manipulation targets in realistic scenarios. To this end, we extend DART, an existing visual articulated object tracker, to additionally avoid interpenetration of multiple interacting objects and to make use of contact information collected via torque or touch sensors. To achieve greater stability, the tracker uses a switching model to detect when an object is stationary relative to the table or to the palm, and then uses information from multiple frames to converge to an accurate and stable estimate. Deviations from stable states are detected so that the tracker remains robust to failed grasps and dropped objects. The tracker is integrated into a shared autonomy system in which it provides state estimates used by a grasp planner and the controllers of two anthropomorphic hands. We demonstrate the advantages and performance of the tracking system in simulation and on a real robot. Qualitative results are also provided for a number of challenging manipulations that are made possible by the speed, accuracy, and stability of the tracking system.
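The stationarity switching described in this abstract can be pictured as a small state machine with hysteresis. Below is a minimal sketch, assuming 6-DoF poses arrive once per frame as translation-plus-rotation-vector arrays; the class name, thresholds, and the naive multi-frame averaging are illustrative assumptions, not the published tracker.

```python
# Minimal sketch of a stationarity-switching filter. Poses are assumed to be
# 6-vectors [x, y, z, rx, ry, rz]; all thresholds are illustrative.
import numpy as np

class StationaritySwitch:
    """Switches between MOVING and STATIONARY; averages poses while stationary."""

    def __init__(self, motion_thresh=2e-3, hold_frames=10, break_thresh=8e-3):
        self.motion_thresh = motion_thresh  # metres/frame counted as "still"
        self.break_thresh = break_thresh    # metres/frame that breaks stability
        self.hold_frames = hold_frames      # still frames required to switch
        self.state = "MOVING"
        self.still_count = 0
        self.history = []                   # poses accumulated while still

    def update(self, pose):
        pose = np.asarray(pose, dtype=float)
        if self.history:
            delta = np.linalg.norm(pose[:3] - self.history[-1][:3])
        else:
            delta = np.inf

        if self.state == "MOVING":
            if delta < self.motion_thresh:
                self.still_count += 1
                if self.still_count >= self.hold_frames:
                    self.state = "STATIONARY"
            else:                           # still moving: restart the window
                self.still_count = 0
                self.history.clear()
        elif delta > self.break_thresh:     # e.g. dropped object, failed grasp
            self.state = "MOVING"
            self.still_count = 0
            self.history.clear()

        self.history.append(pose)
        if self.state == "STATIONARY":
            # Multi-frame averaging yields a stable estimate. (Naively
            # averaging rotation vectors is acceptable only for small motions.)
            return self.state, np.mean(self.history, axis=0)
        return self.state, pose
```

Making the break threshold larger than the entry threshold gives the switch hysteresis, so sensor noise near the boundary does not cause rapid toggling between the two states.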
In-hand manipulation with a multi-fingered hand is defined as changing the object pose from an initial to a final grasp configuration while maintaining the fingertip contacts on the object surface. Given only the task constraints, represented as a desired motion of the object and an external force to be applied or resisted, the problem can be expressed as finding a good set of contact points on the object and a corresponding hand configuration compatible with the task to be executed. This paper presents a method for solving this problem, taking into account the kinematic structure and torque limits of the hand, the force-closure condition (which must be maintained along the whole trajectory), and task compatibility. The feasibility of the method is demonstrated in simulations of 2D and 3D examples.
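A central ingredient of this formulation is the force-closure condition. One standard way to test it, sketched below for planar contacts with Coulomb friction, is to check via a linear program whether the origin lies strictly inside the convex hull of the primitive contact wrenches; the disc geometry, contact placement, and friction coefficient here are illustrative assumptions, not the paper's examples.

```python
# Hedged sketch: a force-closure test for planar (2D) contacts via a
# feasibility linear program over friction-cone edge wrenches.
import numpy as np
from scipy.optimize import linprog

def contact_wrenches(points, normals, mu):
    """Primitive wrenches (fx, fy, tau) for the two friction-cone edges
    at each planar contact."""
    wrenches = []
    for p, n in zip(points, normals):
        t = np.array([-n[1], n[0]])          # contact tangent direction
        for f in (n + mu * t, n - mu * t):   # the two cone edges
            f = f / np.linalg.norm(f)
            tau = p[0] * f[1] - p[1] * f[0]  # 2D moment about the origin
            wrenches.append([f[0], f[1], tau])
    return np.array(wrenches).T              # shape (3, 2 * n_contacts)

def in_force_closure(points, normals, mu, eps=1e-6):
    """Force closure holds iff some strictly positive convex combination
    of the primitive wrenches sums to the zero wrench."""
    W = contact_wrenches(points, normals, mu)
    m = W.shape[1]
    A_eq = np.vstack([W, np.ones((1, m))])   # W @ lam = 0 and sum(lam) = 1
    b_eq = np.append(np.zeros(3), 1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(eps, None)] * m, method="highs")
    return res.success

# Three fingertips spread around a unit disc with inward normals.
angles = np.array([0.0, 2.1, 4.2])
pts = np.stack([np.cos(angles), np.sin(angles)], axis=1)
nrm = -pts
print(in_force_closure(pts, nrm, mu=0.3))    # expected: True
```

Checking this condition at sampled points along the planned object trajectory is one plausible way to enforce the requirement that force closure hold throughout the motion.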
The growth of space debris is becoming a severe issue that urgently requires mitigation measures based on maintenance, repair, and de-orbiting technologies. Such on-orbit servicing (OOS) missions, however, are delicate and expensive. Virtual Reality (VR) enables simulation and training in a flexible and safe environment, and hence has the potential to drastically reduce costs and time while increasing the success rate of future OOS missions. This paper presents a highly immersive VR system with which satellite maintenance procedures can be simulated interactively using visual and haptic feedback. The system can be used for verification and training purposes for human and robot systems interacting in space. Our framework combines realistic virtual reality simulation engines with advanced immersive interaction devices. The DLR bimanual haptic device HUG is used as the main user interface. The HUG is equipped with two lightweight robot arms and is able to provide realistic haptic feedback on both human arms. Additional devices provide vibrotactile and electrotactile feedback at the elbow and the fingertips. A distinctive feature of the real-time simulation is the fusion of the Bullet physics engine with our haptic rendering algorithm, an enhanced version of the Voxmap-Pointshell Algorithm. Our haptic rendering engine supports multiple objects in the scene and is able to compute collisions for each of them within 1 ms, enabling realistic virtual manipulation tasks even for stiff collision configurations. The visualization engine ViSTA is used during the simulation to achieve photo-realistic effects, increasing immersion. In order to provide a realistic experience at interactive frame rates, we developed a distributed system architecture in which the load of computing the physics simulation, haptic feedback, and visualization of a complex scene is distributed across dedicated machines. The implementations are presented in detail, and the performance of the overall system is validated. Additionally, a preliminary user study comparing the virtual system to a physical test bed shows the suitability of the VR-OOS framework.
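The core idea behind voxmap-pointshell haptic rendering is a penalty-force computation between a voxelized distance field (the voxmap) of one object and a surface point sampling (the pointshell) of the other. The sketch below illustrates only that basic idea; the sphere geometry, grid resolution, stiffness value, and function names are assumptions for illustration, and the actual algorithm is considerably more elaborate (hierarchical data structures, multiple objects, 1 kHz scheduling).

```python
# Hedged sketch of a voxmap-pointshell-style penalty force: penetrating
# pointshell points each contribute a spring-like force along their normal.
import numpy as np

def make_sphere_distance_field(radius, extent, res):
    """Signed distance to a sphere at the origin, sampled on a cubic grid."""
    axis = np.linspace(-extent, extent, res)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.sqrt(gx**2 + gy**2 + gz**2) - radius, axis

def pointshell_force(points, normals, field, axis, stiffness=500.0):
    """Sum penalty forces over all pointshell points inside the voxmap."""
    step = axis[1] - axis[0]
    # Nearest-voxel lookup via truncation; a real implementation interpolates.
    idx = np.clip(((points - axis[0]) / step).astype(int), 0, len(axis) - 1)
    depth = field[idx[:, 0], idx[:, 1], idx[:, 2]]
    inside = depth < 0.0                      # penetrating points only
    # Force along each point's normal, proportional to penetration depth.
    forces = -stiffness * depth[inside, None] * normals[inside]
    return forces.sum(axis=0)

# One step of a (simulated) 1 kHz loop: two pointshell points pressed
# slightly into a unit sphere from above.
field, axis = make_sphere_distance_field(radius=1.0, extent=1.5, res=64)
pts = np.array([[0.0, 0.0, 0.95], [0.05, 0.0, 0.97]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])  # force direction: out of the sphere
print(pointshell_force(pts, nrm, field, axis))
```

Because the per-step cost is a handful of array lookups and multiplications, this style of computation is what makes sub-millisecond force updates plausible even for stiff contact configurations.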