Continuous human action recognition in real time
2012
DOI: 10.1007/s11042-012-1084-2

Cited by 12 publications (6 citation statements)
References 30 publications

“…Different from the published CHAR methods [18, 19, 20], the proposed algorithm does not detect the start and end points of each human action. We divide feature sequences into pose feature segments and motion feature segments, and thus poses and movements of each human action are embedded in the feature segments.…”
Section: Results (mentioning)
confidence: 97%
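
A minimal sketch of the segment-based idea in the statement above, assuming per-frame skeleton features and a fixed segment length; the feature definitions, function names, and parameters are illustrative assumptions, not the cited authors' code:

    import numpy as np

    def split_into_segments(features, segment_len=15, step=5):
        """Slice a (num_frames, feat_dim) array into overlapping segments."""
        return [features[s:s + segment_len]
                for s in range(0, len(features) - segment_len + 1, step)]

    def pose_and_motion_segments(joint_positions, segment_len=15, step=5):
        """Pose segments come from raw frame features, motion segments from
        frame-to-frame differences, so each action's poses and movements are
        embedded in the segments without explicit start/end detection."""
        pose_feats = joint_positions                     # per-frame pose features
        motion_feats = np.diff(joint_positions, axis=0)  # per-frame motion features
        return (split_into_segments(pose_feats, segment_len, step),
                split_into_segments(motion_feats, segment_len, step))

    # Example: 200 frames of 45-dimensional skeleton features
    pose_segs, motion_segs = pose_and_motion_segments(np.random.rand(200, 45))
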
“…However, it is a tricky problem to decide the sliding window size for such methods. A generative model based on the bag-of-words representation and the translation and scale invariant probabilistic Latent Semantic Analysis model (TSI-pLSA) is proposed in [18]; the start and end frames of a human action are detected according to the posterior probability using a threshold-based method. In [19], the authors use a Hidden Markov Model (HMM)-based action modeling method to model various human actions, and employ an action spotter method to filter out meaningless human actions and to detect the start and end points of human actions.…”
Section: Related Work (mentioning)
confidence: 99%
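
As an illustration of the threshold-based detection described for [18], the following sketch marks action start and end frames wherever a per-frame posterior probability stays above a threshold; the threshold value and helper names are assumptions, not the published method:

    def detect_action_spans(posteriors, threshold=0.6):
        """Return (start_frame, end_frame) pairs where the posterior stays
        above the threshold for a contiguous run of frames."""
        spans, start = [], None
        for t, p in enumerate(posteriors):
            if p >= threshold and start is None:
                start = t                     # candidate action start
            elif p < threshold and start is not None:
                spans.append((start, t - 1))  # candidate action end
                start = None
        if start is not None:                 # action still active at sequence end
            spans.append((start, len(posteriors) - 1))
        return spans

    # Example: one detected span between frames 2 and 5
    print(detect_action_spans([0.1, 0.2, 0.7, 0.9, 0.8, 0.65, 0.3, 0.2]))
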
“…In [10], frame analysis is employed and 81.0% accuracy is reported on the IXMAS dataset. In the case of the Weizmann dataset, for example in [9], CHAR is performed and a score of 97.8% is reached. Segment analysis is employed in this case, although the rate of correctly classified segments is computed based on a 60% overlap with the ground truth.…”
Section: Results (mentioning)
confidence: 99%
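
A small sketch of the evaluation rule mentioned above, under the assumption that a predicted segment counts as correct when its label matches and it covers at least 60% of the corresponding ground-truth segment; the segment representation, one-to-one pairing, and helper names are illustrative only:

    def overlap_ratio(pred, gt):
        """Fraction of the ground-truth frame interval covered by the prediction."""
        inter = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]) + 1)
        return inter / (gt[1] - gt[0] + 1)

    def segment_accuracy(predictions, ground_truth, min_overlap=0.6):
        """predictions / ground_truth: lists of (start_frame, end_frame, label)."""
        correct = sum(1 for (ps, pe, pl), (gs, ge, gl) in zip(predictions, ground_truth)
                      if pl == gl and overlap_ratio((ps, pe), (gs, ge)) >= min_overlap)
        return correct / len(ground_truth)

    # Example: only the first of two segments satisfies the 60% overlap criterion
    preds = [(0, 40, "wave"), (50, 90, "jump")]
    gt    = [(0, 50, "wave"), (80, 120, "jump")]
    print(segment_accuracy(preds, gt))  # 0.5
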
“…The first option performed better, since it does not classify specific temporal moments, but aligns a globally optimal segmentation taking into account movement direction. In [9], start and end key frames of actions are identified. Segmentation is performed based on the posterior probability of model matching considering recognition rounds.…”
Section: Related Work (mentioning)
confidence: 99%