2007 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2007.383168
Searching Video for Complex Activities with Finite State Models

Abstract: We

Cited by 81 publications (51 citation statements). References 33 publications.
“…Hidden semi-Markov Models (HSMM) [10], CRFs [31], and finite-state machines [13] have been used to model the temporal evolution of human activities. Recently, Tang et al. [32] proposed a conditional variant of HSMM incorporating the max-margin framework in the training phase.…”
Section: Sequential Models
confidence: 99%
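
The quoted passage notes that finite-state machines have been used to model the temporal evolution of human activities, which is the general approach taken by the cited paper. The following is a minimal sketch of that idea, not the authors' implementation: the states, event labels, and example activity are hypothetical, and a real system would typically attach probabilistic observation models rather than exact label matches.

# Minimal sketch of a finite-state activity model (hypothetical states/events).
# It scans a stream of per-frame event labels and reports when a composite
# activity completes.

ACTIVITY = {
    # state -> {event label: next state}; reaching "DONE" marks completion
    "START":    {"enter_room": "INSIDE"},
    "INSIDE":   {"pick_object": "CARRYING"},
    "CARRYING": {"exit_room": "DONE"},
}

def detect_activity(event_stream, model=ACTIVITY, start="START"):
    """Return frame indices at which the modeled activity completes."""
    state = start
    completions = []
    for t, event in enumerate(event_stream):
        next_state = model.get(state, {}).get(event)
        if next_state is not None:
            state = next_state
        if state == "DONE":
            completions.append(t)
            state = start  # reset to search for further occurrences
    return completions

# Example: per-frame labels from some upstream detector (hypothetical)
stream = ["idle", "enter_room", "idle", "pick_object", "exit_room", "idle"]
print(detect_activity(stream))  # -> [4]
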
“…This approach has difficulties in aligning non-repetitive actions in complex scenes. Moreover, some researchers model the configuration of the human body and its evolution in the time domain [9,10], and others solely perform action recognition from still images by computing pose primitives [11,12].…”
Section: Human Action Recognition
confidence: 99%
“…Several works have considered a general approach to action recognition, for instance aiming to distinguish among several different activities such as walking, jogging, waving, running, boxing, and clapping [4,5]. State-of-the-art research focuses on limb tracking to model human activities [6], an approach that is limited to high-resolution targets and uncluttered environments [7]. In order to cope with cluttered environments, several works model activities using motion-based features [8,3], shape-based features [9], space-time interest points [4], or a combination of some of the above features [10].…”
Section: Related Work
confidence: 99%