2018
DOI: 10.48550/arxiv.1808.05249
Preprint

LSTM-Based Goal Recognition in Latent Space

Cited by 5 publications (4 citation statements)
References: 0 publications
“…MaxEnt IRL was proposed by Ziebart et al. [17], which uses a probabilistic approach to solve the ill-posed problem in the original IRL. Zeng et al. modified MaxEnt IRL to solve the goal recognition problem in a dynamic environment [18]. However, this work does not consider the situation in which the agent executes a deceptive plan, and the proposed method does not work well in that scenario.…”
Section: Learning in Goal/Plan Recognition (mentioning)
Confidence: 99%
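The statement above describes MaxEnt IRL only as a probabilistic approach, so a minimal sketch of how a goal posterior could be computed under the maximum-entropy trajectory model may help. The function name, the feature/weight representation, and the uniform goal prior below are illustrative assumptions, not the method of the cited papers, and the partition terms are omitted for brevity.

```python
import numpy as np

def goal_posterior(trajectory_features, goal_reward_weights, prior=None):
    """Hedged sketch of MaxEnt-style goal recognition.

    Assumes each candidate goal g has a reward weight vector w_g and that
    P(trajectory | g) is proportional to exp(w_g . phi(trajectory)), as in
    the maximum-entropy trajectory model (normalization constants omitted,
    which is an additional simplifying assumption).
    """
    goals = list(goal_reward_weights)
    phi = np.asarray(trajectory_features, dtype=float)

    # Unnormalized log-likelihood of the observed trajectory under each goal.
    log_lik = np.array([goal_reward_weights[g] @ phi for g in goals])

    # Uniform prior over goals unless one is provided.
    log_prior = np.log(np.full(len(goals), 1.0 / len(goals)) if prior is None
                       else np.asarray([prior[g] for g in goals]))

    # Bayes' rule in log space, then a numerically stable softmax.
    log_post = log_lik + log_prior
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()
    return dict(zip(goals, post))

# Toy usage: two candidate goals, a 3-dimensional trajectory feature vector.
weights = {"goal_A": np.array([1.0, 0.2, -0.5]),
           "goal_B": np.array([0.1, 0.9, 0.3])}
print(goal_posterior([2.0, 1.0, 0.5], weights))
```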
“…The search space generated by Latplan was shown to be compatible with an existing Goal Recognition system (Amado et al. 2018a; 2018b). Another recent approach, which replaces SAE/AMA2 with InfoGAN (Kurutach et al. 2018), has no explicit mechanism for improving the stability of the binary representation.…”
Section: Related Work (mentioning)
Confidence: 99%
“…While there are several learning-based AMA methods that approximate AMA1 (e.g., AMA2 (Asai & Fukunaga, 2018) and Action Learner (Amado, Pereira, Aires, Magnaguagno, Granada, & Meneguzzi, 2018b; Amado, Aires, Pereira, Magnaguagno, Granada, & Meneguzzi, 2018a)), there is information loss between the learned action model and the original generated search space.…”
Section: Baseline Performance Experiments (mentioning)
Confidence: 99%