Proceedings of the Eighteenth International Conference on Principles of Knowledge Representation and Reasoning 2021
DOI: 10.24963/kr.2021/36

Safe Learning of Lifted Action Models

Abstract: Creating a domain model, even for classical, domain-independent planning, is a notoriously hard knowledge-engineering task. A natural approach to solve this problem is to learn a domain model from observations. However, model learning approaches frequently do not provide safety guarantees: the learned model may assume actions are applicable when they are not, and may incorrectly capture actions' effects. This may result in generating plans that will fail when executed. In some domains such failures are not acc…

Cited by 14 publications (33 citation statements) · References 14 publications
“…SAM learning works by initially assuming every action has all the literals as preconditions and none of the literals as effects, and then applying rules 1 and 3 above to remove preconditions and add effects as needed. In a classical planning setting where all actions are grounded, Juba et al. (2021) proved that the action model M_SAM created by SAM learning is: (1) safe, in the sense that every plan consistent with M_SAM is also consistent with the real action model, and (2) probably complete, in the sense that with high probability, for most solvable problems there exists a plan that solves them and is consistent with M_SAM, given a number of trajectories that is polynomial in the number of fluents and actions. They also extended SAM learning to learn lifted action models and provided similar safety and completeness guarantees.…”
Section: SAM Learning (mentioning)
confidence: 85%
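The update rule quoted above can be captured in a few lines. The following is a minimal sketch of the grounded SAM update, assuming states are represented as sets of true fluents and trajectories as (pre-state, action, post-state) triples; the names used here (learn_sam, Transition, etc.) are illustrative and not taken from the authors' implementation.

```python
from collections import defaultdict
from typing import Dict, Hashable, Iterable, Set, Tuple

State = frozenset                           # fluents true in a state
Transition = Tuple[State, Hashable, State]  # (pre-state, action, post-state)


def learn_sam(transitions: Iterable[Transition],
              all_fluents: Set[Hashable]):
    """Grounded SAM learning sketch: start maximally cautious, relax from observations."""
    # Initially, every fluent is assumed to be a precondition of every action,
    # and no action is assumed to have any effect.
    preconditions: Dict[Hashable, Set] = defaultdict(lambda: set(all_fluents))
    add_effects: Dict[Hashable, Set] = defaultdict(set)
    del_effects: Dict[Hashable, Set] = defaultdict(set)

    for s, a, s_next in transitions:
        # Remove preconditions: a fluent that was false when `a` was applied
        # cannot be a precondition of `a`.
        preconditions[a] &= s
        # Add effects: any fluent whose truth value changed must be an effect of `a`.
        add_effects[a] |= (s_next - s)
        del_effects[a] |= (s - s_next)

    return preconditions, add_effects, del_effects
```

Under these updates, the learned model only becomes more permissive as evidence accumulates, so a plan applicable in the learned model remains applicable in the true model; this is the safety property the statement refers to.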
“…The Safe Action Model (SAM) learning algorithm (Juba, Le, and Stern 2021; Stern and Juba 2017) has safety and completeness guarantees similar to those specified above. However, it is designed for classical planning, where states are fully observable and actions have deterministic effects.…”
Section: SAM Learning (mentioning)
confidence: 99%
“…A large body of work involves learning for planning domains (Zimmerman and Kambhampati 2003; Arora et al. 2018). While some approaches learn action models from data, they do not link these action models to policies for reaching specific goals (Amir and Chang 2008; Amado et al. 2019; Asai and Muise 2020; Juba, Le, and Stern 2021).…”
Section: Related Work (mentioning)
confidence: 99%