2020
DOI: 10.1109/lra.2020.2972847
Learning Robust Task Priorities and Gains for Control of Redundant Robots

Abstract: Generating complex movements in redundant robots like humanoids is usually done by means of multitask controllers based on quadratic programming, where a multitude of tasks is organized according to strict or soft priorities. Time-consuming tuning and expertise are required to choose suitable task priorities and to optimize their gains. Here, we automatically learn the controller configuration (soft and strict task priorities and convergence gains), looking for solutions that track a variety of desired task t…
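The soft-priority scheme described in the abstract can be illustrated as a weighted least-squares problem over joint velocities. This is a minimal sketch, not the authors' formulation: the helper name, the task list of `(J, e)` pairs, the weights `w_i`, and the gains `k_i` are all hypothetical stand-ins for the learned controller configuration.

```python
import numpy as np

def soft_priority_qdot(tasks, weights, gains):
    """Illustrative soft-priority resolution (hypothetical helper).

    Minimizes sum_i w_i * ||J_i qdot - k_i e_i||^2 over joint velocities:
    each kinematic task (Jacobian J_i, error e_i) is weighted by its soft
    priority w_i and driven by a convergence gain k_i.
    """
    rows = [np.sqrt(w) * J for (J, _), w in zip(tasks, weights)]
    rhs = [np.sqrt(w) * k * e for (_, e), w, k in zip(tasks, weights, gains)]
    qdot, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return qdot

# Two conflicting toy tasks on a 2-DoF system: the higher-weighted task
# dominates the compromise, which is exactly what a soft priority encodes.
J1, e1 = np.array([[1.0, 0.0]]), np.array([1.0])
J2, e2 = np.array([[1.0, 1.0]]), np.array([0.0])
qdot = soft_priority_qdot([(J1, e1), (J2, e2)], weights=[10.0, 1.0], gains=[1.0, 1.0])
```

Learning the configuration then amounts to searching over `weights` and `gains` (and, for strict priorities, over the task ordering) with an external optimizer.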

Cited by 16 publications (20 citation statements) | References 31 publications
“…Each of these trajectories has a different priority which determines how the robot's controller executes the entire movement (Methods): the top priority is given to the center of mass (to avoid falling) and the feet poses (which should not move in our experiments, since we only target double support motions), and the postural trajectories have the lowest priority. The exact hard and soft priorities were found in previous work with a multi-objective stochastic optimizer so that the robot is unlikely to fall but tracks the trajectories as well as possible (5).…”
Section: Results (confidence: 99%)
“…The whole-body motion of the operator is captured with a motion capture suit (Methods), but it cannot be directly used as a reference for the robot because of the differences in kinematics (e.g., joint limits, body dimensions) and dynamics (e.g., mass distribution). The system therefore needs to "retarget" the motion (4,5,6), that is, to compute references that make sense for the robot. To do so, Cartesian references are scaled using a fixed factor that accounts for the size difference between the operator and the robot (Methods).…”
Section: Results (confidence: 99%)
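The fixed-factor Cartesian retargeting described above can be sketched as follows. The function name and the example body heights are hypothetical; the retargeting in the cited work involves more than a single scaling.

```python
import numpy as np

def retarget_reference(operator_pos, scale):
    # Scale a captured Cartesian reference by a fixed factor that accounts
    # for the size difference between the operator and the robot.
    return scale * np.asarray(operator_pos, dtype=float)

# Hypothetical numbers: a 1.8 m operator teleoperating a ~1.0 m humanoid,
# so hand/foot references shrink by the ratio of the two heights.
robot_ref = retarget_reference([0.3, 0.0, 1.5], scale=1.0 / 1.8)
```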
“…Take the limits of the simulation into account by learning to predict which controllers will transfer well (Koos et al., 2012); or evolve many different solutions with a quality diversity (Cully et al., 2015) or multi-objective algorithm (Penco et al., 2020), then test on the real system to find those that work best on the real robot.…”
Section: Box 1, The Reality Gap Problem (confidence: 99%)
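The second strategy in the passage above, keeping only solutions that no other solution beats on every objective before testing them on the real robot, can be sketched with a simple Pareto filter. This is a hypothetical helper: real multi-objective optimizers (e.g., NSGA-II) maintain such a front incrementally rather than filtering after the fact.

```python
def pareto_front(scores):
    """Return indices of non-dominated solutions (all objectives maximized)."""
    front = []
    for i, s in enumerate(scores):
        dominated = any(
            all(o >= v for o, v in zip(other, s))
            and any(o > v for o, v in zip(other, s))
            for j, other in enumerate(scores)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front

# E.g. score each candidate controller as (tracking accuracy, stability
# margin) in simulation; only the non-dominated candidates are then
# transferred to the real robot for testing.
candidates = [(0.9, 0.1), (0.7, 0.5), (0.6, 0.4)]
front = pareto_front(candidates)
```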