2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197574

Anytime Integrated Task and Motion Policies for Stochastic Environments

Cited by 8 publications (10 citation statements)
References 20 publications
“…Refining each possible outcome in the policy can take a substantial amount of time. However, the ATAM algorithm (Shah et al. 2020) reduces the problem of selecting scenarios for refinement to a knapsack problem and uses a greedy approach to prioritize more likely outcomes for refinement. The empirical evaluation shows that this approach allows the robot to start executing actions much earlier than when outcomes are selected randomly.…”
Section: Stochastic Task and Motion Planning
Mentioning confidence: 99%
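The knapsack reduction with greedy, likelihood-first selection described in this statement can be illustrated with a short sketch. This is not the authors' implementation: the Outcome fields, the probability-per-cost ranking criterion, and the time budget are illustrative assumptions.

```python
# Minimal sketch (assumption, not the ATAM implementation) of greedy,
# knapsack-style selection of policy branches to refine: given a time
# budget, prefer outcomes with high probability per unit refinement cost.
from dataclasses import dataclass

@dataclass
class Outcome:
    name: str            # hypothetical label for a policy branch
    probability: float   # likelihood of reaching this branch at execution time
    refine_cost: float   # estimated time (s) to refine its motion plan

def select_outcomes_greedy(outcomes, time_budget):
    """Pick outcomes to refine within the budget, best probability/cost ratio first."""
    ranked = sorted(outcomes, key=lambda o: o.probability / o.refine_cost, reverse=True)
    chosen, used = [], 0.0
    for o in ranked:
        if used + o.refine_cost <= time_budget:
            chosen.append(o)
            used += o.refine_cost
    return chosen

if __name__ == "__main__":
    branches = [
        Outcome("grasp-succeeds", 0.7, 2.0),
        Outcome("object-slips", 0.2, 3.0),
        Outcome("object-drops", 0.1, 3.0),
    ]
    for o in select_outcomes_greedy(branches, time_budget=5.0):
        print("refine:", o.name)
```

With the example budget, the most likely branches are refined first, which matches the statement's point that execution can begin before every contingency has been refined.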
“…However, robot planning over a long horizon is challenging due to the continuous state and action spaces of the robot. Hierarchical approaches (Garrett, Lozano-Pérez, and Kaelbling 2020; Shah et al. 2020) have shown that such abstractions can also be used for efficient robot planning. Unfortunately, these approaches require sound abstractions that are consistent with the motion planning of the robot.…”
Section: Introduction
Mentioning confidence: 99%
“…Decision-theoretic task planning methods, and specifically Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), are the most prevalent approaches for tackling various types of uncertainty in TAMP formulations (e.g., Şucan and Kavraki, 2012; Kaelbling and Lozano-Pérez, 2013; Hadfield-Menell et al., 2015). A recent work presents an anytime TAMP algorithm that generates policies for handling multiple execution-time contingencies, using an MDP-based modeling of actions that corresponds to an infinite set of motion planning problems (Shah et al., 2020).…”
Section: Analysis of the State-of-the-Art
Mentioning confidence: 99%
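As a rough illustration of the MDP-based modeling of actions mentioned above, the sketch below defines a single symbolic action with stochastic outcomes; the action name, outcome labels, probabilities, and threshold are hypothetical and not taken from the cited works.

```python
# Illustrative sketch (assumptions, not code from the cited works) of an
# MDP-style action model: a symbolic action has several stochastic
# outcomes, and each outcome still needs its own motion-planning query,
# which is why one abstract action stands in for a large set of
# continuous motion planning problems.
import random

# Hypothetical action "pick(cup)" with three possible symbolic outcomes.
PICK_OUTCOMES = [
    ("holding(cup)", 0.80),   # nominal success
    ("cup-on-table", 0.15),   # grasp fails, object stays put
    ("cup-on-floor", 0.05),   # object knocked off
]

def sample_outcome(outcomes):
    """Sample one symbolic successor state according to the outcome distribution."""
    states, probs = zip(*outcomes)
    return random.choices(states, weights=probs, k=1)[0]

def contingencies_to_refine(outcomes, threshold=0.1):
    """Outcomes likely enough to deserve a dedicated motion-plan refinement."""
    return [state for state, prob in outcomes if prob >= threshold]

if __name__ == "__main__":
    print("sampled successor:", sample_outcome(PICK_OUTCOMES))
    print("contingencies to refine:", contingencies_to_refine(PICK_OUTCOMES))
```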
“…Such algorithms occur frequently in time-critical robotic applications, and they raise decision-theoretic problems whose objective is to determine when to stop the computation, that is, stop improving the solution and take action. In this work, we specifically focus on a class of anytime motion planning algorithms [2]–[4]. The goal of motion planning (Figure 1) is to find a path from the initial configuration to the goal configuration such that the robot does not collide with any obstacles.…”
Section: Introduction
Mentioning confidence: 99%
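The stop-or-continue decision for anytime planners described in this statement can be sketched as a simple loop: keep improving the current best path until a deadline is reached or the marginal improvement becomes negligible. The improve_path() stub and the numeric thresholds below are placeholders, not a specific planner from the cited references.

```python
# Generic sketch of an anytime planning loop with a simple stopping rule:
# stop when the deadline is reached or when one more iteration no longer
# improves the path cost by a meaningful amount. The improvement model is
# a random placeholder standing in for a real anytime motion planner.
import random
import time

def improve_path(best_cost):
    """Placeholder for one planner iteration; returns a (possibly) better cost."""
    return best_cost * random.uniform(0.95, 1.0)

def anytime_plan(deadline_s=1.0, min_improvement=1e-3, initial_cost=100.0):
    start = time.monotonic()
    best_cost = initial_cost
    while time.monotonic() - start < deadline_s:
        new_cost = improve_path(best_cost)
        if best_cost - new_cost < min_improvement:
            break                 # further computation no longer pays off; act now
        best_cost = new_cost
    return best_cost

if __name__ == "__main__":
    print("final path cost:", anytime_plan())
```

The deadline-plus-diminishing-returns rule is only one way to frame the decision-theoretic trade-off the statement refers to; the cited work studies when to stop computation in a more principled manner.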