2019
DOI: 10.1609/aaai.v33i01.33017715

Deep Learning for Cost-Optimal Planning: Task-Dependent Planner Selection

Abstract: As classical planning is known to be computationally hard, no single planner is expected to work well across many planning domains. One solution to this problem is to use online portfolio planners that select a planner for a given task. These portfolios perform a classification task, a well-known and well-researched task in the field of machine learning. The classification is usually performed using a representation of planning tasks with a collection of hand-crafted statistical features. Recent techniques in m…
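The classification framing described in the abstract can be made concrete with a minimal sketch: hand-crafted statistical features of a task feed a standard classifier that picks which planner to run. The feature values and planner labels below are illustrative placeholders, not the features or portfolio used in the paper.

```python
# Minimal sketch of portfolio planner selection framed as classification.
# Features and labels are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: hand-crafted statistical features of one training task
# (e.g., number of variables, operators, goal facts -- placeholder values).
X_train = np.array([
    [120, 450, 8],
    [30, 90, 3],
    [2000, 9000, 25],
])
# Label: index of the planner that performed best on that task.
y_train = np.array([0, 1, 2])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At planning time, extract the same features from the new task and
# dispatch to the predicted planner.
new_task_features = np.array([[150, 500, 10]])
selected_planner = clf.predict(new_task_features)[0]
print(f"Run planner #{selected_planner}")
```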

Cited by 21 publications (26 citation statements)
References 20 publications
“…However, it differs from our method in that it requires a manually-engineered input representation for each domain, whereas our method can easily be applied to any planning problem expressed as (P)PDDL. Sievers, Katz, Sohrabi, Samulowitz, and Ferber (2019) have also used traditional (image-based) convolutional neural networks for planning-related tasks. Instead of learning a generalised policy like the present work or like Groshev et al. (2018), they learn to perform planner selection in classical planning domains.…”
Section: Knowledge Representations (mentioning)
confidence: 99%
“…Single planner; multiple splits: We additionally report in Table 3 the results of multiple splits for the lifted graphs, following Sievers et al. (2019a). See the top three rows.…”
Section: Effectiveness of Graph Neural Network (mentioning)
confidence: 99%
“…For example, convolutional neural networks (CNNs) take raw pixels as input and learn the feature representation of an image through layers of convolutional transformations and abstractions, resulting in a feature vector that captures the most important characteristics of the image (Krizhevsky, Sutskever, and Hinton 2012). A successful example in the context of planning is Delfi (Katz et al. 2018; Sievers et al. 2019a), which treats a planning task as an image and applies a CNN to predict the probability that a certain planner solves the task within the time limit. Delfi won first place in the Optimal Track of the 2018 International Planning Competition (IPC).…”
Section: Introduction (mentioning)
confidence: 99%
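A minimal sketch of the Delfi-style idea quoted above: treat the planning task as a single-channel "image" and let a CNN output the probability that one particular planner solves it within the time limit. The layer sizes and image resolution are illustrative assumptions, not the architecture actually used by Delfi.

```python
# Hedged sketch: CNN mapping an image-encoded planning task to a solve probability.
import torch
import torch.nn as nn

class SolveProbabilityCNN(nn.Module):
    def __init__(self, image_size: int = 128):
        super().__init__()
        # Two small conv/pool stages extract spatial features from the task image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * (image_size // 4) * (image_size // 4)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, 1),  # one logit: "this planner solves the task in time"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.classifier(self.features(x)))

# A batch of 4 grayscale task images with values in [0, 1] (placeholder data).
tasks = torch.rand(4, 1, 128, 128)
model = SolveProbabilityCNN()
solve_prob = model(tasks)  # shape (4, 1): predicted solve probability per task
print(solve_prob.squeeze(1))
```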
“…What is learned? Existing DL approaches may be split into four categories: learning domain descriptions (Say et al. 2017; Asai and Fukunaga 2018), policies (Buffet and Aberdeen 2009; Toyer et al. 2018; Groshev et al. 2018; Issakkimuthu, Fern, and Tadepalli 2018; Garg, Bajpai, and Mausam 2019), heuristics (Samadi, Felner, and Schaeffer 2008; Arfaee, Zilles, and Holte 2010; Thayer, Dionne, and Ruml 2011; Gomoluch et al. 2017), and planner selection (Sievers et al. 2019). Our work is concerned with learning heuristics.…”
Section: Related Work (mentioning)
confidence: 99%
“…Most existing DL approaches to planning use standard architectures and rely on hand-engineered features or encodings of planning problems as images. For instance, Sievers et al. (2019) train convolutional neural networks (CNNs) over graphical representations of planning problems converted into images, to determine which planner should be invoked for a planning task. For learning generalised policies and heuristics, Groshev et al. (2018) train CNNs and Graph Convolutional Networks with images obtained via a domain-specific, hand-coded problem conversion.…”
Section: Related Work (mentioning)
confidence: 99%
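One simple, hypothetical way to realise the "graph converted into an image" step discussed above is to render the task graph's adjacency matrix as a fixed-size grayscale image; the actual conversion used by Sievers et al. (2019) may differ, so this is only a sketch of the general idea.

```python
# Hedged sketch: render a task graph's adjacency matrix as a grayscale image.
import numpy as np
from PIL import Image

def graph_to_image(num_nodes: int, edges: list[tuple[int, int]],
                   size: int = 128) -> np.ndarray:
    """Return a size x size float array in [0, 1] depicting the adjacency matrix."""
    adj = np.zeros((num_nodes, num_nodes), dtype=np.uint8)
    for u, v in edges:
        adj[u, v] = 255
        adj[v, u] = 255  # treat the graph as undirected for this illustration
    # Rescale to a fixed resolution so every task yields the same input shape.
    img = Image.fromarray(adj).resize((size, size), Image.NEAREST)
    return np.asarray(img, dtype=np.float32) / 255.0

# Toy graph standing in for a task's structure graph (placeholder edges).
image = graph_to_image(num_nodes=5, edges=[(0, 1), (1, 2), (2, 3), (3, 4)])
print(image.shape)  # (128, 128)
```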