2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2018.8624977

Planning with a Receding Horizon for Manipulation in Clutter Using a Learned Value Function

Abstract: Manipulation in clutter requires solving complex sequential decision-making problems in an environment rich with physical interactions. The transfer of motion planning solutions from simulation to the real world, in open-loop, suffers from the inherent uncertainty in modelling real-world physics. We propose interleaving planning and execution in real-time, in a closed-loop setting, using a Receding Horizon Planner (RHP) for pushing manipulation in clutter. In this context, we address the problem of finding a s…
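Below is a minimal sketch of the closed-loop receding-horizon scheme the abstract describes: short physics roll-outs seeded from the latest real-world observation are scored by their accumulated reward plus a learned value estimate at the horizon, only the first action of the best roll-out is executed, and planning then restarts from a fresh observation. The callables observe_state, execute_action, simulate_step, sample_action and learned_value are hypothetical stand-ins, not the paper's actual interfaces.

import numpy as np

def receding_horizon_control(observe_state, execute_action, simulate_step,
                             learned_value, sample_action, horizon=5,
                             n_rollouts=32, max_steps=100, goal_reached=None):
    """Interleave planning and execution: plan over a short horizon in a
    physics simulator seeded from the latest observation, execute only the
    first action, then re-plan from the new real-world state."""
    for _ in range(max_steps):
        state = observe_state()                  # close the loop with the real world
        if goal_reached is not None and goal_reached(state):
            return True

        best_first_action, best_score = None, -np.inf
        for _ in range(n_rollouts):
            sim_state, ret, first_action = state, 0.0, None
            for t in range(horizon):
                action = sample_action(sim_state)
                if t == 0:
                    first_action = action
                sim_state, reward = simulate_step(sim_state, action)
                ret += reward
            # The learned value estimates the return-to-go beyond the horizon,
            # so short roll-outs still account for long-term consequences.
            score = ret + learned_value(sim_state)
            if score > best_score:
                best_score, best_first_action = score, first_action

        execute_action(best_first_action)        # commit to a single action only
    return False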

Cited by 29 publications (30 citation statements). References 23 publications.
“…8), which can be generalized to new environments without training in the same environment. In terms of interleaving planning and execution in real time and closed-loop settings, Bejjani et al [114] implemented a receding horizon planner (RHP) for pushing manipulation in clutter, as shown in Fig. 9.…”
Section: B. Suction and Multifunctional Grasping
confidence: 99%
“…The autonomous solutions to the reaching through clutter problem can be categorized into three groups: There are sampling-based planning approaches [5], [6], [9], trajectory optimization based approaches [3], [14], and learning-based approaches [4], [7], [15], [16]. While these approaches show varying degrees of success, the difficult instances of this problem are still challenging for autonomous systems, due to the problem being high-dimensional and under-actuated, and also due to real-world physics uncertainty.…”
Section: Related Work
confidence: 99%
“…By continuously updating the simulator state, where planning takes place, from real-world observations, RHP circumvents the problem of compounding modelling errors over long sequences of actions. Under the assumption of a fully observable environment, we have shown in our previous work how RHP can be used with a heuristic to guide physics-based roll-outs and to estimate the cost-to-go from the horizon to the goal [16]. This approach balances the advantages of model-based sequential reasoning with a model-free scalable heuristic [17], [18].…”
Section: Introduction
confidence: 99%
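The excerpt above notes that the learned heuristic plays two roles: it biases which actions the physics-based roll-outs explore, and it approximates the cost-to-go from the horizon to the goal. The sketch below illustrates that idea under assumed interfaces (value_net, candidate_actions and simulate_step are hypothetical names, not the paper's implementation): roll-out branches are sampled with a softmax over value estimates, and the same value network caps the roll-out at the horizon.

import numpy as np

def guided_rollout(state, simulate_step, candidate_actions, value_net,
                   horizon=5, temperature=1.0, rng=None):
    """Run one heuristic-guided physics roll-out and return its estimated return."""
    rng = rng or np.random.default_rng()
    total_reward = 0.0
    for _ in range(horizon):
        actions = candidate_actions(state)
        # Score each candidate by the learned value of its simulated outcome.
        next_states, rewards = zip(*(simulate_step(state, a) for a in actions))
        values = np.array([value_net(s) for s in next_states])
        # Softmax sampling: the heuristic guides, but does not fully determine,
        # which branch the physics roll-out follows.
        probs = np.exp((values - values.max()) / temperature)
        probs /= probs.sum()
        idx = rng.choice(len(actions), p=probs)
        state = next_states[idx]
        total_reward += rewards[idx]
    # Beyond the horizon, the same heuristic stands in for the cost-to-go.
    return total_reward + value_net(state)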