2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
DOI: 10.1109/ro-man50785.2021.9515548
Learning Task Constraints in Visual-Action Planning from Demonstrations

Abstract: Visual planning approaches have shown great success for decision-making tasks with no explicit model of the state space. Learning a suitable representation and constructing a latent space where planning can be performed allows non-experts to set up and plan motions by just providing images. However, learned latent spaces are usually not semantically interpretable, and thus it is difficult to integrate task constraints. We propose a novel framework to determine whether plans satisfy constraints given demonstrati…
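The abstract describes planning in a learned latent space constructed from images. A minimal sketch of the idea is below, with strong simplifying assumptions: a fixed random projection stands in for a trained encoder, and plans are found by breadth-first search over a nearest-neighbor graph of latent states. All names (`encode`, `knn_graph`, `plan`) and the toy data are illustrative, not the paper's method.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
proj = rng.normal(size=(64, 2))           # stand-in for a trained encoder

def encode(image):
    """Map a flattened 8x8 'image' to a 2-D latent vector."""
    return image @ proj

# Toy dataset: images with a simple 1-D structure plus tiny noise,
# so their latent codes lie roughly along a line.
images = rng.normal(size=(30, 64)) * 0.0001
images[:, 0] = np.linspace(0.0, 1.0, 30)
latents = np.stack([encode(im) for im in images])

def knn_graph(z, k=3):
    """Adjacency lists linking each latent point to its k nearest neighbors."""
    d = np.linalg.norm(z[:, None] - z[None, :], axis=-1)
    return [list(np.argsort(row)[1:k + 1]) for row in d]

def plan(graph, start, goal):
    """BFS over the latent graph; returns a sequence of state indices."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in graph[node]:
            if nb not in prev:
                prev[nb] = node
                frontier.append(nb)
    return None

graph = knn_graph(latents)
path = plan(graph, 0, 29)
print(path[0], path[-1])   # the plan runs from state 0 to state 29
```

A real system would replace the random projection with an encoder trained on demonstrations, and would check each candidate plan against the learned task constraints before execution.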

Cited by 1 publication (1 citation statement) | References 17 publications (19 reference statements)
“…One common approach for extracting knowledge from demonstrations is policy learning, where the goal is to learn a policy that maps the input space into the output space (usually the state and action spaces, respectively) and that can accurately mimic the demonstrations. Some approaches bypass explicit state characterization by defining as input raw data from the system with deep learning methods [13], [14]. Another approach that is widely used is based on Markov Decision Processes.…”
Section: A. Learning From Demonstrations (confidence: 99%)
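The citation statement above describes policy learning from demonstrations: fitting a policy that maps the input space (states) to the output space (actions) so as to mimic the demonstrator. A hedged sketch of the simplest variant, behavioral cloning as least-squares regression, is shown below; the linear policy and the synthetic "expert" are illustrative assumptions, not the cited paper's approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: the expert's action is a fixed linear map of the state.
W_expert = np.array([[1.0, -0.5],
                     [0.3,  2.0]])
states = rng.normal(size=(200, 2))     # input space: demonstrated states
actions = states @ W_expert.T          # output space: demonstrated actions

# Behavioral cloning: fit the policy by least squares on the (state, action) pairs.
W_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Learned policy: maps a state to an imitated action."""
    return state @ W_learned

# The learned policy reproduces the expert on a held-out state.
test_state = np.array([1.0, 1.0])
print(np.allclose(policy(test_state), test_state @ W_expert.T, atol=1e-6))
```

Deep-learning variants (as in the cited [13], [14]) replace the linear map with a neural network so the policy can consume raw sensory input directly, bypassing explicit state characterization.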