2020
DOI: 10.48550/arxiv.2002.00412
Preprint

Combating False Negatives in Adversarial Imitation Learning

Cited by 1 publication (1 citation statement)
References 0 publications
“…For instance, Fu et al. (2017) explored how rewards can generalize to training policies under changing dynamics. However, most prior work focuses on improving policy generalization to unseen task settings by addressing challenges introduced by the adversarial training objective of GAIL (Xu & Denil, 2019; Zolna et al., 2020; Lee et al., 2021; Barde et al., 2020; Jaegle et al., 2021; Dadashi et al., 2020). Finally, in contrast to most related work on generalization, our work focuses on analyzing and improving reward function transfer to new task settings.…”
Section: Background and Related Work
Citation type: mentioning
confidence: 99%