2021
DOI: 10.1609/aaai.v35i13.17421
Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

Abstract: Real-world planning problems often involve hundreds or even thousands of objects, straining the limits of modern planners. In this work, we address this challenge by learning to predict a small set of objects that, taken together, would be sufficient for finding a plan. We propose a graph neural network architecture for predicting object importance in a single inference pass, thus incurring little overhead while greatly reducing the number of objects that must be considered by the planner. Our approach treats …
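The abstract's core idea — one GNN forward pass assigns each object an importance score, and the planner then considers only the high-scoring objects — can be sketched minimally. This is a hypothetical illustration, not the paper's architecture: the features, adjacency structure, and fixed weights below are invented for demonstration (a trained model would learn the weights from data).

```python
import numpy as np

def gnn_object_scores(feats, adj, w_self, w_nbr, w_out):
    """One round of message passing, then a per-object importance score in (0, 1)."""
    msgs = adj @ feats                         # sum each object's neighbors' features
    hidden = np.tanh(feats @ w_self + msgs @ w_nbr)
    logits = hidden @ w_out
    return 1.0 / (1.0 + np.exp(-logits))       # sigmoid -> importance score

# Toy scene: 4 objects with 2 features each; adjacency encodes object relations.
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [0.0, 0.0]])                 # object 3 is featureless and isolated
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)

# Fixed illustrative weights (stand-ins for learned parameters).
w_self = np.eye(2)
w_nbr = 0.5 * np.eye(2)
w_out = np.array([1.0, 1.0])

scores = gnn_object_scores(feats, adj, w_self, w_nbr, w_out)
keep = [i for i, s in enumerate(scores) if s > 0.5]
print(keep)  # the pruned problem is passed to the planner with only these objects
```

The single matrix-multiply aggregation is what makes the cost "a single inference pass": scoring is one forward computation over the whole scene graph, after which the irrelevant object (here, the isolated one) falls below the threshold and is pruned before planning begins.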

Cited by 28 publications (8 citation statements) | References 28 publications
“…By contrast, efficient simulation means choosing the right idealization (Davis & Marcus, 2015; Fisher, 2006). In practice, systems that predict and plan over appropriately reduced representations are also more efficient (Agia et al., 2021; Silver et al., 2021). AI systems that reason more reasonably about physics could benefit from incorporating the same shortcuts that humans might be using, including limited samples (Battaglia et al., 2013; Hamrick et al., 2015), simplified shape representations (Smith et al., 2019; Ullman et al., 2017), or partial simulation as studied here.…”
Section: Discussion
confidence: 98%
“…This allows such methods to create any feasible plan, but they do not leverage commonsense knowledge to ensure that a goal state is reached in a few steps. Some recent works, such as that of Silver et al. (2021), aim to learn object importance from human demonstrations. However, such methods can only work on objects seen previously in training and cannot generalize to unseen objects.…”
Section: Classical Planning
confidence: 99%
“…We ensure that in our domains the number of objects is large and that objects can be contained within, supported by, or transported by other objects as tools (details in Section 5). Similar to the work of Silver et al. (2021), our model learns to prune away irrelevant objects, additionally considering a domain with richer inter-object interactions. Consequently, our learner makes additional use of semantic properties, exploits correlations between actions, and outputs interactions that are likely to lead to successful plans.…”
Section: Classical Planning
confidence: 99%
“…At the end, the optimal plan uses 11 trucks. Even though 98.9% of the truck objects were irrelevant, they still impacted the performance of the planners (Fuentetaja and de la Rosa 2016; Silver et al. 2021). Bob ends up frustrated with the whole procedure.…”
Section: Introduction
confidence: 99%