2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.338

From Stochastic Grammar to Bayes Network: Probabilistic Parsing of Complex Activity

Abstract: We propose a probabilistic method for parsing a temporal sequence such as a complex activity defined as composition of sub-activities/actions. The temporal structure of the high-level activity is represented by a string-length limited stochastic context-free grammar. Given the grammar, a Bayes network, which we term Sequential Interval Network (SIN), is generated where the variable nodes correspond to the start and end times of component actions. The network integrates information about the duration of each pr…
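The key step the abstract describes is that a string-length-limited stochastic CFG derives only finitely many action strings, so each can be enumerated with its probability before interval nodes are generated. The toy grammar, symbol names, and probabilities below are illustrative assumptions, not taken from the paper:

```python
# Sketch (assumed grammar, not the paper's): a length-limited stochastic
# CFG has finitely many derivable terminal strings; each string, with its
# probability, is what the SIN turns into a chain of start/end-time nodes.
rules = {
    "S": [(["A", "B"], 0.7), (["A", "C", "B"], 0.3)],  # production -> prob
}

def expand(symbols, prob=1.0):
    """Enumerate all terminal strings of the grammar with probabilities."""
    for i, s in enumerate(symbols):
        if s in rules:  # first nonterminal found: branch on its productions
            for rhs, p in rules[s]:
                yield from expand(symbols[:i] + rhs + symbols[i + 1:], prob * p)
            return
    yield symbols, prob  # all symbols are terminals

strings = list(expand(["S"]))
for s, p in strings:
    print(" -> ".join(s), p)
```

For each enumerated string, the SIN would then attach a start-time and an end-time variable to every action token, linked by that action's duration model.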

Cited by 72 publications (44 citation statements)
References 21 publications
“…First, a classical SVM misclassification penalty term and, second, a term related to the saliency within the bounding box. The proposed SVM tries to find the class label and the bounding-box location that optimally balance the minimization of the misclassification cost once a linear classifier is applied to features extracted within the bounding box in question, and the maximization of the sum of the predicted saliencies within the bounding box.…”
Section: Accepted Manuscript (mentioning)
confidence: 99%
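The excerpt describes an objective that trades off a classifier's response against the summed saliency inside a candidate box. The scoring function below is a minimal sketch of that balance; the feature extractor, box coordinates, weights, and trade-off constant `lam` are all illustrative assumptions, not the cited paper's formulation:

```python
# Sketch (assumed, not the cited paper's code): score each candidate
# bounding box by a linear classifier response on its features plus a
# weighted sum of predicted saliency inside the box, then keep the best.
import numpy as np

rng = np.random.default_rng(0)
saliency = rng.random((64, 64))  # per-pixel predicted saliency map (stand-in)
boxes = [(0, 0, 32, 32), (16, 16, 48, 48), (8, 24, 40, 56)]  # (x0, y0, x1, y1)
w = rng.standard_normal(16)      # linear classifier weights (assumed trained)

def features(box):
    """Hypothetical feature extractor for the region (placeholder)."""
    x0, y0, x1, y1 = box
    patch = saliency[y0:y1, x0:x1]
    hist, _ = np.histogram(patch, bins=16, range=(0, 1), density=True)
    return hist

lam = 0.01  # assumed trade-off between classifier score and saliency term

def combined_score(box):
    x0, y0, x1, y1 = box
    cls = float(w @ features(box))             # misclassification-side term
    sal = float(saliency[y0:y1, x0:x1].sum())  # saliency inside the box
    return cls + lam * sal

best = max(boxes, key=combined_score)
print("best box:", best)
```

In the cited work the two terms are optimized jointly inside the SVM training objective; here the balance is only illustrated at scoring time.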
“…Usually, features based on shape, such as HOG and Scale-Invariant Feature Transform (SIFT), and Latent Dirichlet Allocation (LDA) [31] can be used. Action grammars [32,33,34,35], models that use graph relations [18,36], and latent SVMs [37,38,39] are also used to model the higher levels of action recognition frameworks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Qualitative Result: Figure 7 shows some example posterior-distribution outputs when running our method on a sequence in streaming mode (we encourage readers to watch the supplementary video [3]). At first, no observation is available; the distributions are determined by the prior information about the start of the task (which we set to be uniform over the first 30 s) and the duration models of the primitives.…”
Section: Toy Assembly Task Experiments (mentioning)
confidence: 99%
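Before any observation arrives, the posterior the excerpt mentions follows purely from the task-start prior and the per-primitive duration models: each primitive's end-time distribution is the start-time distribution convolved with its duration prior. The horizon, primitive names, and Gaussian duration parameters below are assumptions for illustration:

```python
# Sketch under stated assumptions (1 s time steps, Gaussian duration
# models, uniform start over the first 30 s): prior probability that each
# primitive is active at time t, with no observations yet.
import numpy as np

T = 120  # assumed horizon in one-second steps

def duration_model(mean, std):
    # Discretized (assumed Gaussian) duration prior over 0..T-1 seconds.
    d = np.exp(-0.5 * ((np.arange(T) - mean) / std) ** 2)
    return d / d.sum()

start_prior = np.zeros(T)
start_prior[:30] = 1.0 / 30  # task start uniform over the first 30 s

primitives = [("grasp", duration_model(10, 3)),
              ("insert", duration_model(20, 5)),
              ("screw", duration_model(15, 4))]

active = np.zeros((len(primitives), T))
s = start_prior
for i, (name, dur) in enumerate(primitives):
    e = np.convolve(s, dur)[:T]              # end time = start + duration
    active[i] = np.cumsum(s) - np.cumsum(e)  # p(start <= t) - p(end <= t)
    s = e / e.sum()                          # next primitive starts here

for (name, _), row in zip(primitives, active):
    print(name, "most likely active at t =", int(row.argmax()))
```

Once observations stream in, the SIN would reweight these priors with observation likelihoods; the convolution chain above is only the observation-free case the excerpt describes.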
“…This is an enhanced version of our previous conference paper [3]. We have added more detail and discussion, as well as additional experiments to validate the proposed method.…”
Section: Introduction (mentioning)
confidence: 99%
“…An abnormal activity is detected if it has low likelihood under a criterion. Typical approaches include Dynamic Bayesian Networks (DBNs) (Swears et al., 2014; Vo and Bobick, 2014) such as Hidden Markov Models (HMMs) (Banerjee and Nevatia, 2014). Probabilistic topic models (PTMs) (Kinoshita et al., 2014) such as Latent Dirichlet Allocation (LDA) (Hospedales et al., 2011) or the Hierarchical Dirichlet Process (HDP) (Kuettel et al., 2010) are powerful methods for learning activities in surveillance videos.…”
Section: Introduction (mentioning)
confidence: 99%
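The "low likelihood under a criterion" idea in the excerpt can be made concrete with a much simpler stand-in for the cited DBN/HMM models: fit a first-order Markov chain over discrete action labels from normal sequences, then flag a test sequence whose log-likelihood falls below a threshold. The action labels, training sequences, and threshold are all illustrative assumptions:

```python
# Sketch (assumed setup): abnormality as low likelihood under a model
# learned from normal data; a first-order Markov chain over action labels
# stands in for the DBN/HMM approaches cited above.
import numpy as np

normal_seqs = [["reach", "grasp", "move", "place"],
               ["reach", "grasp", "move", "place"],
               ["reach", "grasp", "place"]]

states = sorted({a for s in normal_seqs for a in s})
idx = {a: i for i, a in enumerate(states)}
n = len(states)

# Transition counts with add-one (Laplace) smoothing.
counts = np.ones((n, n))
for seq in normal_seqs:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
trans = counts / counts.sum(axis=1, keepdims=True)

def loglik(seq):
    """Log-likelihood of a label sequence under the learned chain."""
    return sum(np.log(trans[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))

threshold = -3.0  # assumed; in practice set from normal data, e.g. a percentile
for seq in [["reach", "grasp", "move", "place"], ["place", "reach", "place"]]:
    ll = loglik(seq)
    print(seq, round(ll, 2), "ABNORMAL" if ll < threshold else "normal")
```

The cited models replace this chain with richer latent structure (hidden states, topics), but the detection criterion is the same thresholded likelihood.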