2013
DOI: 10.1109/tit.2012.2234824
The Principle of Maximum Causal Entropy for Estimating Interacting Processes

Abstract: The principle of maximum entropy provides a powerful framework for estimating joint, conditional, and marginal probability distributions. However, there are many important distributions with elements of interaction and feedback where its applicability has not been established. This work presents the principle of maximum causal entropy, an approach based on directed information theory for estimating an unknown process based on its interactions with a known process. We demonstrate the breadth of the approach…
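
For orientation, the quantity the abstract refers to is the causally conditioned entropy of the unknown process A^T given the known process S^T; in the notation of the directed-information literature (a standard statement of the objective, not the paper's full constraint set):

    H(A^T \,\|\, S^T) = \sum_{t=1}^{T} H(A_t \mid S_{1:t},\, A_{1:t-1})

Each term conditions only on the side information available causally at step t, rather than on the whole sequence S^T as ordinary conditional entropy would; the principle of maximum causal entropy selects the distribution maximizing this quantity subject to feature-matching constraints.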


Cited by 115 publications (140 citation statements)
References: 47 publications (53 reference statements)
“…It is often useful to consider the maximum entropy principle in its regularized form [Ziebart et al., 2013], that is, instead of finding a maximum entropy distribution we want to find a distribution with the minimal KL divergence relative to a "prior" distribution p_0(τ) while matching the features of the demonstrator, that is,…”
Section: Information Theoretic Understanding of Imitation Learning Al… (mentioning; confidence: 99%)
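
In symbols, the regularized form described in this quote can be written as (a standard formulation; the citing paper's exact constraints may differ):

    \min_{p} \; \mathrm{KL}\!\left(p(\tau) \,\|\, p_0(\tau)\right)
    \quad \text{s.t.} \quad \mathbb{E}_{p}[f(\tau)] = \mathbb{E}_{\tilde{p}}[f(\tau)]

where \tilde{p} is the demonstrator's empirical distribution. The solution takes the Gibbs form p(\tau) \propto p_0(\tau) \exp(\theta^{\top} f(\tau)); a uniform prior p_0 recovers the ordinary maximum entropy distribution.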
“…Alternate prior distributions can easily be taken into account by simply adding a "feature" that is log p_0(τ), either with a weight fixed to 1.0 or one allowed to adapt and learn. The maximum causal entropy distribution [Ziebart et al., 2013] can be understood as also removing the effects of stochastic dynamics. For learning tasks involving physical systems, it is often desirable to consider alternate p_0(τ), particularly by exploiting information in the system dynamics.…”
Section: Interpretation of IRL with the Maximum Entropy Principle (mentioning; confidence: 99%)
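
The "log p_0(τ) as a feature" device in this quote can be made explicit. Writing w for the weight of that feature,

    p(\tau) \propto \exp\!\left(\theta^{\top} f(\tau) + w \log p_0(\tau)\right) = p_0(\tau)^{w} \exp\!\left(\theta^{\top} f(\tau)\right)

so fixing w = 1 recovers the KL-regularized solution above, while letting w adapt controls how strongly the prior shapes the learned distribution.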
“…[4] develops a structural SOC-based model for estimating mobile phone users' preferences from their observed daily data consumption. On the Inverse Reinforcement Learning side, our framework is rooted in the Maximum Entropy IRL (MaxEnt-IRL) method [5, 6]. Other relevant references for Maximum Entropy IRL are Refs.…”
Section: Related Work (mentioning; confidence: 99%)
“…Given an initial guess for the optimal parameter θ_k^(0), we can also consider a regularized version of the negative log-likelihood. (Footnote 6: A more complex case of co-dependencies between rewards for individual customers can be considered, but we will not pursue this approach here.) Note that this specification formally enables calibration at the level of an individual customer, in which case N would equal the number of consumption cycles observed for this user.…”
Section: Probabilities of T-Step Paths (mentioning; confidence: 99%)
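
The excerpt does not show the regularizer itself. One common choice consistent with the sentence, an assumption here rather than the citing paper's actual definition, is an L2 penalty centered at the initial guess:

    \mathcal{L}(\theta_k) = -\sum_{i=1}^{N} \log p_{\theta_k}(\tau_i) + \frac{\lambda}{2} \left\| \theta_k - \theta_k^{(0)} \right\|^2

which keeps the estimate near θ_k^(0) when N, the number of observed consumption cycles for a given customer, is small.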
“…Next, we explain how the driver's objective inferred by inverse optimal control can be used to predict her behavior in new situations (Section II-C). Maximum causal entropy inverse optimal control [19] is presented in Section II-D as an approach to account for suboptimal driver behavior. We then report the setting of the driving study we conducted for evaluation in Section III.…”
Section: Contributions (mentioning; confidence: 99%)
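
As a concrete sketch of the maximum causal entropy model of suboptimal behavior referenced here: for a finite Markov decision process, the policy comes from a soft (log-sum-exp) analogue of value iteration. The snippet below is a minimal Python illustration; the function name, array shapes, and undiscounted finite-horizon setup are assumptions for the example, not the cited work's implementation.

    import numpy as np

    def max_causal_entropy_policy(P, r, T):
        """Finite-horizon soft value iteration (maximum causal entropy).

        P: transition tensor, shape (S, A, S); P[s, a, s2] = p(s2 | s, a)
        r: reward matrix, shape (S, A)
        T: horizon (number of decision steps)
        Returns a list of T stochastic policies, each of shape (S, A).
        """
        S, A, _ = P.shape
        V = np.zeros(S)                    # soft value beyond the horizon
        policies = []
        for _ in range(T):
            Q = r + P @ V                  # soft Q backup: r(s,a) + E[V(s')]
            Qmax = Q.max(axis=1)           # stabilize the log-sum-exp
            V = Qmax + np.log(np.exp(Q - Qmax[:, None]).sum(axis=1))
            policies.append(np.exp(Q - V[:, None]))  # pi(a|s) = exp(Q - V)
        policies.reverse()                 # so policies[t] is the step-t policy
        return policies

Unlike hard value iteration, every action keeps nonzero probability, with mass concentrating on higher soft-Q actions; this stochasticity is what lets the model account for suboptimal (noisily rational) driver behavior.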