2018
DOI: 10.48550/arxiv.1806.11244
Preprint

One-Shot Learning of Multi-Step Tasks from Observation via Activity Localization in Auxiliary Video

Citations: cited by 2 publications (2 citation statements)
References: 0 publications
“…while third-person imitation learning uses data from other agents or viewpoints [27,35]. Recent methods for one-shot imitation learning [8,11,13,40,41,42] can translate a single demonstration into an executable policy. The most similar to ours is NTP [41], which also learns long-horizon tasks.…”
Section: Related Work
confidence: 99%
“…Real-world tasks are often long-horizon and multi-step, posing severe challenges for simple techniques such as behavior-cloning-based methods [2]. Recent works in one-shot imitation learning aim to mitigate this challenge by imposing modular or hierarchical task structures in order to learn reusable subtask policies [4,6,11–13]. NTP [6] decomposes demonstrations with hierarchical programs.…”
Section: Related Work
confidence: 99%