Robotics: Science and Systems XV 2019
DOI: 10.15607/rss.2019.xv.050

Leveraging Experience in Lazy Search

Abstract: Lazy graph search algorithms are efficient at solving motion planning problems where edge evaluation is the computational bottleneck. These algorithms work by lazily computing the shortest potentially feasible path, evaluating edges along that path, and repeating until a feasible path is found. The order in which edges are selected is critical to minimizing the total number of edge evaluations: a good edge selector chooses edges that are not only likely to be invalid, but also eliminate future paths from consideration. …
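For orientation, the following is a minimal sketch of the lazy-search loop the abstract describes, assuming a directed adjacency-list graph and hypothetical helpers (`is_edge_valid`, `edge_selector`); it is an illustration under those assumptions, not the paper's implementation.

```python
# Sketch of a lazy graph search loop: plan without checking edges,
# then evaluate selected edges on the candidate path, prune, repeat.
import heapq

def dijkstra(adj, start, goal):
    """Shortest path over edges not yet known to be invalid (no edge checks here)."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == goal:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (d + w, v, path + [v]))
    return None

def lazy_search(adj, start, goal, is_edge_valid, edge_selector):
    """Lazy search: the edge_selector decides which edge to evaluate next."""
    evaluated = {}  # (u, v) -> bool, cache of edge-evaluation outcomes
    while True:
        path = dijkstra(adj, start, goal)
        if path is None:
            return None  # no potentially feasible path remains
        edges = list(zip(path, path[1:]))
        unchecked = [e for e in edges if e not in evaluated]
        if not unchecked:
            return path  # every edge on the path is valid: feasible path found
        e = edge_selector(unchecked)  # the critical choice this paper studies
        evaluated[e] = is_edge_valid(*e)
        if not evaluated[e]:
            u, v = e
            # Prune the invalid edge (directed graph assumed; prune both
            # directions for an undirected roadmap).
            adj[u] = [(n, w) for n, w in adj[u] if n != v]

# Example with a naive selector that checks the first unevaluated edge:
adj = {"s": [("a", 1.0)], "a": [("g", 1.0)]}
print(lazy_search(adj, "s", "g", lambda u, v: True, lambda es: es[0]))
```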

Cited by 11 publications (11 citation statements)
References 29 publications
“…Apart from the aforementioned end-to-end methods, value-based RL can also be used as a Module Solution [71][72][73][74][75][76]. For example, value-based RL can be used as a heuristic for motion-planning algorithms such as A* and RRT [71][72][73].…”
Section: Motion Planning With Value-Based RL Methods
Citation type: mentioning · Confidence: 99%
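As one concrete illustration of the idea in the statement above (a sketch under assumed interfaces, not code from the cited works [71][72][73]): a value function learned with value-based RL can be plugged into A* as the heuristic h(n). Here `learned_value` stands in for any trained cost-to-go approximator.

```python
# A* where the heuristic is a learned cost-to-go estimate instead of a
# hand-designed distance. All callables are supplied by the caller.
import heapq

def astar(start, goal, neighbors, cost, learned_value):
    """Return the cost of a path from start to goal, or None if unreachable."""
    frontier = [(learned_value(start, goal), 0.0, start)]  # (f, g, node)
    best_g = {start: 0.0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                # f = g + h, with h supplied by the learned value function
                heapq.heappush(frontier, (g2 + learned_value(nxt, goal), g2, nxt))
    return None
```

Note that A* only retains its optimality guarantee if the learned estimate happens to be admissible; in practice such learned heuristics trade guarantees for fewer expansions.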
“…Huh et al. [73] proposed a learned softmax node-selection method based on Q-function approximation to improve the random sampling strategy of sampling-based motion-planning algorithms. Bhardwaj et al. [74] addressed the shortest-path problem by integrating RL into the lazy graph search algorithm. Specifically, the edge-selection component is mapped to an MDP and solved with tabular Q-learning.…”
Section: Motion Planning With Value-Based RL Methods
Citation type: mentioning · Confidence: 99%
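A hedged sketch of the approach the quoted survey attributes to Bhardwaj et al. [74]: edge selection in lazy search cast as an MDP and solved with tabular Q-learning. The state/action encodings, reward, and hyperparameters below are illustrative assumptions, not the paper's exact formulation.

```python
# Tabular Q-learning over an edge-selection MDP: states are hashable
# encodings of (current candidate path, evaluation history), actions are
# which unevaluated edge on the path to check next.
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.5, 1.0, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)              # Q[(state, edge)] -> estimated value

def select_edge(state, edges):
    """Epsilon-greedy choice among the unevaluated edges on the current path."""
    if random.random() < EPS:
        return random.choice(edges)
    return max(edges, key=lambda e: Q[(state, e)])

def q_update(state, edge, reward, next_state, next_edges):
    """One tabular Q-learning backup; reward could be, e.g., -1 per evaluation."""
    best_next = max((Q[(next_state, e)] for e in next_edges), default=0.0)
    Q[(state, edge)] += ALPHA * (reward + GAMMA * best_next - Q[(state, edge)])
```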
“…GLS [12] uses priors to quickly invalidate subpaths until the shortest path is found. STROLL [13] learns an edge evaluation policy for LazySP. BISECT [14] and DIRECT [15] formalize Bayesian motion planning and compute near Bayes-optimal policies for finding feasible paths.…”
Section: Related Work (A. Priors in Lazy Search)
Citation type: mentioning · Confidence: 99%
“…We begin by presenting a framework for lazy search algorithms that uses priors, thus unifying several previous works in this area [3,13,14,26]. In Experienced Lazy Path Search (Algorithm 1), a proposer lazily computes a path from the start to the goal (without any edge evaluation) and a path validator chooses edges along the path to evaluate.…”
Section: A. Experienced Lazy Path Search
Citation type: mentioning · Confidence: 99%
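A hedged sketch of the validator half of that proposer/validator split, assuming the prior is a learned per-edge probability of invalidity; the helper names and data structures are hypothetical, not the paper's.

```python
def prior_validator(path_edges, evaluated, prior):
    """Pick the unevaluated edge with the highest prior probability of being invalid."""
    unevaluated = [e for e in path_edges if e not in evaluated]
    return max(unevaluated, key=prior)

# Usage with a hypothetical learned prior table (0.5 default for unseen edges):
prior_table = {("a", "b"): 0.9, ("b", "c"): 0.1}
edge = prior_validator([("a", "b"), ("b", "c")], set(),
                       lambda e: prior_table.get(e, 0.5))
assert edge == ("a", "b")  # most likely invalid, so evaluate it first
```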
“…MPNet generalizes to unseen environments, but it still learns through imitation. The StrOLL algorithm [5], which tackles lazy graph search for path planning, also uses the notion of an "oracle", but it learns planning policies by imitating an oracle that has full knowledge of the environment map at training time and can therefore compute optimal decisions. This algorithm does not consider non-holonomic constraints or path smoothness, so it is not directly applicable to our problem.…”
Section: Related Work
Citation type: mentioning · Confidence: 99%