Proceedings of the 22nd International Conference on Machine Learning - ICML '05 2005
DOI: 10.1145/1102351.1102394
Learning approximate preconditions for methods in hierarchical plans

Abstract: A significant challenge in developing planning systems for practical applications is the difficulty of acquiring the domain knowledge needed by such systems. One method for acquiring this knowledge is to learn it from plan traces, but this method typically requires a huge number of plan traces to converge. In this paper, we show that the problem with slow convergence can be circumvented by having the learner generate solution plans even before the planning domain is completely learned. Our empirical results sh…
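The trace-based learning idea described in the abstract can be sketched as follows. This is a minimal, hedged illustration, not the paper's actual algorithm: the set-of-literals state representation and the `learn_precondition` helper are assumptions made for the example.

```python
# Hypothetical sketch: learn an approximate precondition for an HTN
# method by intersecting the state literals observed whenever the
# method was successfully applied in a plan trace. Each state is
# modeled here as a set of ground literals (plain strings).

def learn_precondition(traces):
    """Intersect the states preceding successful applications of a method.

    traces: list of states (each a set of literals) observed just before
    a successful application of the method. The result over-approximates
    the true precondition: it may retain literals that merely co-occurred,
    and additional traces prune it further (hence slow convergence when
    many traces are needed).
    """
    approx = None
    for state in traces:
        approx = set(state) if approx is None else approx & state
    return approx if approx is not None else set()

traces = [
    {"at(truck, depot)", "empty(truck)", "sunny"},
    {"at(truck, depot)", "empty(truck)", "rainy"},
]
learn_precondition(traces)  # returns {'at(truck, depot)', 'empty(truck)'}
```

With only two traces the spurious weather literals are already eliminated; in general, each new trace can only shrink the learned precondition, which is the convergence behavior the paper aims to speed up.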

Cited by 25 publications (23 citation statements); references 7 publications.
“…Garland et al [18] presented an approach implemented in a development environment for constructing and maintaining a hierarchical task model from a set of annotated examples provided by domain experts, where the task model constructed did not include preconditions or effects, i.e., without methods' preconditions, actions' preconditions or actions' effects. Ilghami et al [29] and Xu and Muñoz-Avila [65] proposed eager (in the form of version spaces) and lazy learning (in the form of case-based reasoning) algorithms respectively to learn the preconditions of HTN methods, given as input the hierarchical relationships between tasks, the action models, and a complete description of the intermediate states. Nejati et al [46,51] used means-end analysis to learn structures and preconditions of the input plans, assuming that a model of the tasks in the form of Horn clauses was given.…”
Section: HTN Learning
confidence: 99%
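The eager, version-space-style learning mentioned in the citation above can be sketched as follows. This is an assumed illustration of the general technique, not the CaMeL algorithm itself; the conjunctive, set-of-literals hypothesis space and the function name are choices made for the example.

```python
# Hypothetical sketch of eager (version-space-style) precondition
# learning: maintain the most-specific conjunction consistent with the
# positive examples, and use negative examples to detect when no
# conjunction of literals can separate the two classes.

def version_space_precondition(positives, negatives):
    """Learn a conjunctive method precondition from labeled states.

    positives: states (sets of literals) where the method applied.
    negatives: states where it did not.
    Returns the most-specific consistent conjunction (as a set of
    literals), or None if no conjunction separates the examples.
    """
    if not positives:
        return set()  # no evidence yet: the empty (always-true) conjunction
    s_boundary = set.intersection(*map(set, positives))
    for state in negatives:
        if s_boundary <= state:  # a negative state satisfies the hypothesis
            return None          # hypothesis space is inconsistent
    return s_boundary
```

The contrast with the lazy, case-based alternative is that here all examples are compressed eagerly into one boundary hypothesis, whereas a case-based learner would retain the individual states and generalize only at query time.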
“…An alternative to HTNLearn that also solves this problem would be to learn the action models with ARMS [68] and separately learn the method preconditions with an existing algorithm such as CaMeL [29]. To determine the importance of learning the action models and method preconditions simultaneously, we ran an experiment comparing HTNLearn against a hybrid system, which we call ARMS+, that first uses ARMS to learn the action models and then uses the method-based constraints to learn the method preconditions.…”
Section: Accuracy With Complete Decomposition Trees
confidence: 99%
“…In (Ilghami et al 2002; Ilghami et al 2005) the authors propose a learning system for HTNs in which a domain expert solves task networks, providing examples to the learner. The learner then generalizes from the expert's training examples and can solve similar tasks more effectively.…”
Section: Hierarchical Task Network
confidence: 99%
“…Most systems that learn procedural knowledge acquire flat structures [15,2,9]. A few learning systems output hierarchical task structures, but they assume that the structure of the hierarchy is either given or determined by supervised feedback [5,6]. There have been some attempts at learning the structure of a hierarchy from observed solutions, but they suffer from over-generality or over-specificity of the learned knowledge [4,12].…”
Section: Introduction
confidence: 99%