2021
DOI: 10.48550/arxiv.2107.04169
Preprint
Safe Learning of Lifted Action Models

Abstract: Creating a domain model, even for classical, domain-independent planning, is a notoriously hard knowledge-engineering task. A natural approach to solving this problem is to learn a domain model from observations. However, model-learning approaches frequently do not provide safety guarantees: the learned model may assume actions are applicable when they are not, and may incorrectly capture actions' effects. This can result in generating plans that fail when executed. In some domains such failures are not accep…

Cited by 1 publication (1 citation statement)
References 11 publications
“…Model-based goal recognition focuses on learning the action model and domain theory of the recognizer. Amir et al. [20, 29–31] employed various learning methods to study behavior models, but have not established a link between these models and the recognizer's strategy. Zeng et al. [32] used inverse reinforcement learning to learn the recognizer's reward and implemented a Markov-based goal recognition algorithm.…”
Section: Goal Recognition as Learning
confidence: 99%