2012
DOI: 10.1007/978-3-642-35749-7_13
Middle-Level Representation for Human Activities Recognition: The Role of Spatio-Temporal Relationships

Abstract: We tackle the challenging problem of human activity recognition in realistic video sequences. Unlike local feature-based methods or global template-based methods, we propose to represent a video sequence by a set of middle-level parts. A part, or component, has consistent spatial structure and consistent motion. We first segment the visual motion patterns and generate a set of middle-level components by clustering keypoint-based trajectories extracted from the video. To further exploit the interdep…
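The abstract sketches the pipeline at a high level: extract keypoint-based trajectories, then cluster them into middle-level components with consistent spatial structure and motion. Below is a minimal sketch of that idea, not the authors' implementation: it assumes OpenCV's KLT tracker for the trajectories and scikit-learn's KMeans for the grouping, and the per-trajectory feature (mean position plus mean displacement) and cluster count are illustrative choices the paper does not specify here.

```python
# Minimal sketch of the middle-level pipeline described above, assuming
# OpenCV for keypoint tracking and scikit-learn for clustering; it is an
# illustration, not the authors' implementation.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def track_keypoints(video_path, max_corners=200, num_frames=15):
    """Track corner keypoints with pyramidal Lucas-Kanade optical flow."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return []
    tracks = [[tuple(p.ravel())] for p in pts]
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        for track, p, st in zip(tracks, nxt, status):
            if st:  # tracks that fail are simply not extended
                track.append(tuple(p.ravel()))
        pts, prev = nxt, gray
    cap.release()
    return [np.asarray(t, dtype=np.float32) for t in tracks if len(t) > 1]

def cluster_components(tracks, n_components=10):
    """Group trajectories into middle-level components: each trajectory is
    summarized by its mean position and mean frame-to-frame displacement,
    so one cluster collects keypoints with coherent location and motion."""
    feats = np.array([np.hstack([t.mean(axis=0),
                                 np.diff(t, axis=0).mean(axis=0)])
                      for t in tracks])
    km = KMeans(n_clusters=min(n_components, len(tracks)), n_init=10)
    return km.fit_predict(feats)
```

Clustering on position and displacement jointly is what encourages each component to be spatially compact and move coherently, matching the abstract's definition of a part.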

Cited by 16 publications (18 citation statements)
References 27 publications
“…Several pedestrians are present in the videos as well, complicating recognition. The UT-Interaction dataset was used for the human activity recognition contest (SDHA 2010) [14], and it has been tested by several state-of-the-art methods [17,18,19].…”
Section: Dataset (mentioning)
confidence: 99%
“…Table 1: Comparison of classification results on UT-Interaction.

Method                 half observation   full observation
Integral BoW [6]       65%                81.7%
Dynamic BoW [6]        70%                85%
Cuboid + Bayesian [6]  25%                71.7%
Cuboid + SVMs [7]      31.7%              85%
BP-SVM [8]             -                  83.3%
Pose 'Doublet' [12]    -                  79.17%
Mid-level [4]          -                  78.2%
Our proposed           80%                91.7%…”
Section: Methods (mentioning)
confidence: 97%
“…Table 8.

Method                      Accuracy
D-BoW [27]                  85.0%
I-BoW [27]                  81.7%
Cuboid SVM [26]             85.0%
BP-SVM [19]                 83.3%
Cuboid/Bayesian [27]        71.7%
DP-SVM [34]                 14.6%
Yu et al. [50]              83.3%
Yuan et al. [51]            78.2%
Waltisberg et al. [41]      88.0%
Ours                        90.0%
Diff. to State-of-the-Art   +2.0%…”
Section: Methods (mentioning)
confidence: 99%