Teleoperation of heavy machinery in industry often requires operators to be in close proximity to the plant and to issue commands at the level of individual actuators using joystick input devices. This is non-intuitive and makes achieving desired job properties challenging, requiring operators to complete extensive and costly training. Despite this training, operator fatigue is common, with implications for personal safety, project timeliness, cost, and quality. While full automation is not yet achievable due to the unpredictable and dynamic nature of the environment and task, shared control paradigms allow operators to issue high-level commands in an intuitive, task-informed control space while the robot optimizes for achieving the desired job properties. In this paper, we compare a number of modes of teleoperation, exploring both the dimensionality of the control input and the most intuitive control spaces. Our experimental evaluation of the performance metrics quantifies task difficulty using the well-known Fitts' law, together with a measure of how well constraints affecting task performance were met. Our experiments show that higher performance is achieved when humans submit commands in low-dimensional task spaces rather than through joint-space manipulation.
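The abstract does not state which formulation of Fitts' law underlies the difficulty metric; as a minimal illustration, the sketch below computes the commonly used Shannon index of difficulty and the corresponding movement-time prediction. The constants `a` and `b` and the example target dimensions are illustrative assumptions, not values from the paper.

```python
import math

def fitts_index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty (in bits):
    ID = log2(D / W + 1), where D is the distance to the target and
    W is the target width along the axis of motion."""
    return math.log2(distance / width + 1.0)

def predicted_movement_time(distance: float, width: float,
                            a: float, b: float) -> float:
    """Fitts' law movement-time model MT = a + b * ID, with empirically
    fitted constants a (seconds) and b (seconds per bit)."""
    return a + b * fitts_index_of_difficulty(distance, width)

# Example: a 0.40 m reach to a 0.05 m target, with illustrative a and b.
print(predicted_movement_time(0.40, 0.05, a=0.2, b=0.15))
```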
This work presents a fully integrated system for reliable grasping and manipulation using dense visual mapping, collision-free motion planning, and shared autonomy. Motion sequences are composed automatically from high-level objectives provided by a human operator, and continuous scene monitoring during execution automatically detects and adapts to dynamic changes in the environment. The system automatically recovers from a variety of disturbances and falls back to the operator if it becomes stuck or cannot otherwise guarantee safety. Furthermore, the operator can take control at any time and then resume autonomous operation. Our system can be flexibly adapted to new robotic systems, and we demonstrate our work on two real-world platforms, one fixed-base and one floating-base, in shared-workspace scenarios. To the best of our knowledge, this work is also the first to employ the inverse Dynamic Reachability Map for real-time, optimized mobile base positioning to maximize workspace manipulability, reducing cycle time and increasing planning and autonomy robustness.
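The abstract does not detail how the inverse Dynamic Reachability Map is queried; the sketch below only illustrates the underlying idea of base placement as a lookup-and-maximize step, assuming a hypothetical precomputed table of candidate base poses and the manipulability the arm would achieve from each.

```python
import numpy as np

# Hypothetical precomputed data: each row is a candidate base pose
# (x, y, yaw) and the associated score is the manipulability the arm
# achieves for the current grasp target from that pose. In the actual
# map this lookup is indexed by the target pose relative to the base.
candidate_poses = np.array([
    [1.0,  0.2,  0.0],
    [0.8, -0.1,  0.3],
    [1.2,  0.0, -0.2],
])
manipulability = np.array([0.12, 0.31, 0.25])

def select_base_pose(poses: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Pick the candidate base pose that maximizes manipulability, so the
    subsequent arm plan starts far from singular configurations."""
    return poses[int(np.argmax(scores))]

print(select_base_pose(candidate_poses, manipulability))  # -> [ 0.8 -0.1  0.3]
```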
Many robotic tasks require human interaction through teleoperation to achieve high performance. However, in industrial applications these methods often demand high levels of concentration and manual dexterity, leading to high cognitive load and dangerous working conditions. Shared autonomy attempts to address these issues by blending human and autonomous reasoning, relieving the burden of precise motor control, tracking, and localization. In this paper we propose an optimization-based representation for shared autonomy in dynamic environments. We ensure real-time tractability by modulating the human input with information about the changing environment in the same task space, instead of adding it to the optimization cost or constraints. We illustrate the method with two real-world applications: grasping objects in a cluttered environment, and a spraying task requiring homogeneous sprayed linings. Finally, we use a 7-degree-of-freedom KUKA LWR arm to simulate the grasping and spraying experiments.
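The abstract does not specify the form of the modulation; the following sketch shows one plausible instance of task-space modulation, assuming the environment information is reduced to a nearest-obstacle position and the operator issues end-effector velocity commands. The function name, gains, and distances are hypothetical.

```python
import numpy as np

def modulate_command(v_human: np.ndarray,
                     ee_pos: np.ndarray,
                     obstacle_pos: np.ndarray,
                     influence: float = 0.3,
                     gain: float = 1.0) -> np.ndarray:
    """Attenuate the component of the operator's task-space velocity that
    points toward a nearby obstacle. The correction stays in the same task
    space as the human input rather than entering the optimizer's cost or
    constraints (a stand-in for the paper's modulation, not its method)."""
    to_obstacle = obstacle_pos - ee_pos
    dist = np.linalg.norm(to_obstacle)
    if dist > influence or dist == 0.0:
        return v_human                                  # obstacle far away
    direction = to_obstacle / dist
    approach = max(np.dot(v_human, direction), 0.0)     # speed toward obstacle
    damping = gain * (1.0 - dist / influence)           # stronger when closer
    return v_human - damping * approach * direction

v = modulate_command(np.array([0.1, 0.0, 0.0]),
                     ee_pos=np.array([0.5, 0.0, 0.4]),
                     obstacle_pos=np.array([0.7, 0.0, 0.4]))
print(v)  # commanded velocity, attenuated along the obstacle direction
```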
Performing a number of motion patterns, referred to as skills (e.g., wave, spiral, and sweeping motions), during teleoperation is an integral part of many industrial processes such as spraying, welding, and wiping (cleaning, polishing). Maintaining these motions whilst simultaneously avoiding obstacles and traversing complex terrain requires expert operators. In this work, we propose a novel skill-based shared control framework that provides skill assistance to help novice operators sustain these motion patterns whilst adhering to environmental constraints. Our shared control method uses streaming joystick data to estimate the model parameters that describe the operator's intention. We introduce a novel parameterization of state and control that combines skill and underlying trajectory models, leveraging a special type of curve known as the Clothoid. This new parameterization allows for efficient computation of skill-based short-horizon plans, enabling the use of a Model Predictive Control (MPC) loop. We perform experiments on a hardware mock-up, validating the effectiveness of our method at recognizing a switch of intended skill and showing improved quality of the output motion, even under dynamically changing obstacles. See our accompanying video here: https://youtu.be/TwhsgA6fw6M.
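The abstract leaves the Clothoid-based parameterization unspecified; as a minimal sketch of the curve family involved, the code below samples a standard Euler spiral, whose curvature grows linearly with arc length, using SciPy's Fresnel integrals. The `sharpness` and `length` values are illustrative assumptions.

```python
import numpy as np
from scipy.special import fresnel

def clothoid_points(sharpness: float, length: float, n: int = 100) -> np.ndarray:
    """Sample an Euler spiral (Clothoid) with curvature kappa(s) = sharpness * s,
    returned as an (n, 2) array of x, y points starting at the origin with
    zero initial heading."""
    s = np.linspace(0.0, length, n)
    # Scale so that SciPy's Fresnel integrals (defined with pi * t^2 / 2 in
    # the integrand) realize the requested linear curvature rate.
    a = np.sqrt(np.pi / abs(sharpness))
    S, C = fresnel(s / a)
    return a * np.column_stack([C, np.sign(sharpness) * S])

pts = clothoid_points(sharpness=2.0, length=1.5)
print(pts[-1])  # end point of the sampled spiral
```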