2002
DOI: 10.1613/jair.839

Policy Recognition in the Abstract Hidden Markov Model

Abstract: In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of …
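To make the abstract's framing concrete, here is a minimal, hypothetical sketch of on-line recognition as Bayesian filtering over a belief about which abstract policy the agent is executing. It is not the paper's AHMM algorithm (which exploits the policy hierarchy and Rao-Blackwellisation); the transition and observation matrices below are illustrative placeholders.

```python
# Minimal sketch of on-line recognition as Bayesian filtering (not the paper's
# AHMM inference procedure): maintain a belief over which abstract policy the
# agent is executing and update it after each noisy observation.
# All model parameters here are hypothetical toy values.
import numpy as np

def filter_step(belief, transition, obs_likelihood):
    """One forward-filtering update: predict, then re-weight by the observation."""
    predicted = transition.T @ belief        # P(policy_t | obs_1..t-1)
    posterior = predicted * obs_likelihood   # proportional to P(policy_t | obs_1..t)
    return posterior / posterior.sum()

# Two hypothetical abstract policies; the agent rarely switches between them.
transition = np.array([[0.95, 0.05],
                       [0.05, 0.95]])
# Likelihood of each discrete observation symbol under each policy.
obs_model = np.array([[0.8, 0.2],   # policy 0 mostly emits symbol 0
                      [0.3, 0.7]])  # policy 1 mostly emits symbol 1

belief = np.array([0.5, 0.5])       # uniform prior over policies
for obs in [0, 0, 1, 1, 1]:         # a toy observation sequence
    belief = filter_step(belief, transition, obs_model[:, obs])
print(belief)                       # belief shifts toward policy 1
```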

Citations: cited by 164 publications (141 citation statements)
References: 40 publications
“…These approaches use sophisticated, hierarchical representations of goals and subtasks, such as scripts and event hierarchies, to model the structure of agents' behavior, and model goal inference in terms of logical sufficiency or necessity of the observed behavior for achieving a particular goal. Probabilistic versions of these ideas have also been proposed, which allow inductive, graded inferences of structured goals and plans from observations of behavior (Charniak & Goldman, 1991;Bui et al, 2002;Liao et al, 2004). However, these approaches assume that the distribution over actions, conditioned on goals, is either available a priori (Charniak & Goldman, 1991;Bui et al, 2002), or must be estimated from a large dataset of observed actions (Liao et al, 2004).…”
Section: Related Work
confidence: 99%
“…Probabilistic versions of these ideas have also been proposed, which allow inductive, graded inferences of structured goals and plans from observations of behavior (Charniak & Goldman, 1991;Bui et al, 2002;Liao et al, 2004). However, these approaches assume that the distribution over actions, conditioned on goals, is either available a priori (Charniak & Goldman, 1991;Bui et al, 2002), or must be estimated from a large dataset of observed actions (Liao et al, 2004). An alternative is to model the abstract principles underlying intentional action, which can be used to generate action predictions in novel situations, without requiring a large dataset of prior observations.…”
Section: Related Work
confidence: 99%
“…The switching nodes f_k^g, f_k^t and f_k^m indicate when changes in a variable's value can happen. An efficient algorithm based on Rao-Blackwellised particle filters [14][15][16] has been developed to perform online inference for this model. At the lowest level, location tracking on the street map is done using graph-based Kalman filtering that is more efficient than the grid-based Bayesian filter and traditional particle filtering [17], used for the 2TBN model.…”
Section: A New Hierarchical Model
confidence: 99%
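The excerpt above mentions Rao-Blackwellised particle filtering for online inference in a hierarchical model. As a hedged illustration of that general idea (not the cited authors' implementation), the sketch below samples only a discrete switching variable per particle and marginalises the lower-level state analytically with an exact forward update; all matrices and probabilities are hypothetical toy values.

```python
# Hedged sketch of Rao-Blackwellised particle filtering for a switching model:
# each particle samples only the discrete switching value, while the
# lower-level state is marginalised analytically with an exact HMM forward
# update. Illustrative only; parameters below are toy values.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 100
N_STATES = 3

# Low-level state dynamics under each switching value (False = stay, True = reset).
trans_stay  = 0.9 * np.eye(N_STATES) + 0.1 / N_STATES
trans_reset = np.full((N_STATES, N_STATES), 1.0 / N_STATES)
obs_model   = np.array([[0.8, 0.1, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.1, 0.1, 0.8]])   # P(obs | state)
p_switch = 0.05                             # prior probability of a reset

# Each particle carries an exact belief over the low-level state.
beliefs = np.full((N_PARTICLES, N_STATES), 1.0 / N_STATES)
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

for obs in [0, 0, 2, 2]:                    # toy observation sequence
    switches = rng.random(N_PARTICLES) < p_switch   # sample the switching nodes
    for i in range(N_PARTICLES):
        trans = trans_reset if switches[i] else trans_stay
        predicted = trans.T @ beliefs[i]
        likelihood = predicted @ obs_model[:, obs]  # P(obs | particle history)
        weights[i] *= likelihood
        beliefs[i] = predicted * obs_model[:, obs] / likelihood
    weights /= weights.sum()
    # (resampling step omitted for brevity)

# Posterior over the low-level state, marginalising over particles.
print(weights @ beliefs)
```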
“…Bui et al [15] introduced the abstract hidden Markov model, which uses hierarchical representations to efficiently infer a person's goal in an indoor environment from camera information. Later, Bui [16] extended this model to include memory nodes, which enable the transfer of context information over multiple time steps.…”
Section: Home
confidence: 99%
“…Most previous work on plan and goal recognition has assumed cooperative or neutral agents [22,33,7,6,21]. Cohen, Perrault and Allen [8] distinguish between two kinds of plan recognition: keyhole and intended.…”
Section: Introduction
confidence: 99%