Generalized Inverse Reinforcement Learning with Linearly Solvable MDP (2017)
DOI: 10.1007/978-3-319-71246-8_23

Cited by 3 publications (3 citation statements). References 9 publications.

Citation statements:
“…As a special case, Dvijotham and Todorov (2010) showed that maximum entropy IRL is the solution to a LMDP with uniform passive dynamics. Kohjima, Matsubayashi and Sawada (2017) proposed a Bayesian IRL method for learning state values for LMDPs using variational approximation.…”
mentioning; confidence: 99%
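For reference, the linearly solvable MDP structure invoked in these statements can be written out explicitly. The following is a sketch in our own notation (not taken from the cited papers): given a state cost q(s), passive dynamics p(s'|s), and desirability z(s) = exp(-v(s)) for cost-to-go v, the Bellman equation becomes linear in z,

    z(s) = exp(-q(s)) \sum_{s'} p(s'|s) z(s'),

and the optimal controlled transition probabilities are

    u*(s'|s) = p(s'|s) z(s') / \sum_{s''} p(s''|s) z(s'').

With uniform passive dynamics p, fitting q by maximizing the likelihood of demonstrated trajectories under u* recovers the maximum entropy IRL objective, which is the Dvijotham and Todorov (2010) correspondence quoted above.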
“…We present the first application of IRL for collective animal movement using Bayesian learning of state costs-to-go for an LMDP. As an extension of Kohjima, Matsubayashi and Sawada (2017), we reduce the dimension of the state space with basis function approximation, compare variational approximation to MCMC sampling, and consider the multiagent LMDP. We first demonstrate the modeling framework for a simulation of the Vicsek et al (1995) SPP model to illustrate the mechanisms of the LMDP framework in Section 3.…”
mentioning; confidence: 99%
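To make the underlying value computation concrete, here is a minimal Python sketch of solving a discrete LMDP for its desirability function by power iteration. The function name and the toy problem are our own illustration under the notation above, not code from the cited papers:

    import numpy as np

    def solve_lmdp(q, P, iters=1000, tol=1e-10):
        # Desirability z(s) = exp(-v(s)); the LMDP Bellman equation is
        # linear, z = diag(exp(-q)) @ P @ z, so z is the principal
        # eigenvector of G = diag(exp(-q)) @ P. Find it by power iteration.
        G = np.exp(-q)[:, None] * P
        z = np.ones(len(q))
        for _ in range(iters):
            z_new = G @ z
            z_new /= z_new.max()  # rescale to avoid numerical underflow
            if np.max(np.abs(z_new - z)) < tol:
                break
            z = z_new
        return z

    # Toy example: 5 states, uniform passive dynamics, one cheap state.
    n = 5
    P = np.full((n, n), 1.0 / n)
    q = np.array([1.0, 1.0, 1.0, 1.0, 0.1])
    z = solve_lmdp(q, P)
    v = -np.log(z)                    # cost-to-go, up to an additive constant
    u = (P * z) / (P @ z)[:, None]    # optimal controlled transitions

A basis-function variant of the kind described in the quote would parameterize v(s) ≈ Φ(s)w and estimate the weights w rather than a full table of values; the Bayesian IRL direction inverts this computation, inferring q (or the values themselves) from observed transitions.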
“…As a special case, Dvijotham and Todorov (2010) showed the maximum entropy IRL is the solution to a LMDP with uniform passive dynamics. Kohjima et al (2017) proposed a Bayesian IRL method for learning state values for LMDPs using variational approximation.…”
Section: Introduction; mentioning; confidence: 99%