2008
DOI: 10.1109/icpr.2008.4761701
A unified model for activity recognition from video sequences

Abstract: We propose an activity recognition algorithm that utilizes a unified spatial-frequency model of motion to recognize large-scale differences in action using global statistics, and subsequently distinguishes between motions with similar global statistics by spatially localizing the moving objects. We model the Fourier transforms of translating rigid objects in a video, since the Fourier domain inherently groups regions of the video with similar motion in high energy concentrations within its domain to make global…
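The frequency-domain motion model described in the abstract rests on the Fourier shift theorem: a rigid translation leaves the magnitude spectrum of a frame unchanged and introduces a linear phase ramp whose slope encodes the displacement, which is why similarly-moving regions concentrate energy coherently in the Fourier domain. A minimal NumPy sketch of that property (illustrative only, not the authors' implementation; the synthetic frame and phase-correlation recovery are assumptions for the demo):

```python
import numpy as np

# Illustrative sketch of the Fourier shift theorem underlying
# frequency-domain motion models (not the paper's algorithm).
rng = np.random.default_rng(0)
frame = rng.random((64, 64))          # synthetic video frame
dy, dx = 3, 5                         # per-frame translation (pixels)
shifted = np.roll(frame, shift=(dy, dx), axis=(0, 1))  # exact circular shift

F0 = np.fft.fft2(frame)
F1 = np.fft.fft2(shifted)

# Translation changes no energy: magnitude spectra are identical.
assert np.allclose(np.abs(F0), np.abs(F1))

# The phase difference is a plane whose slope is (dy, dx); phase
# correlation turns that plane into a delta at the displacement.
cross = F1 * np.conj(F0)
cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, drop magnitude
peak = np.unravel_index(np.argmax(np.abs(np.fft.ifft2(cross))), frame.shape)
print(peak)  # recovered (dy, dx) displacement
```

Because the shift here is exactly circular, phase correlation recovers the displacement (3, 5) without error; for real video, windowing and non-rigid motion blur the peak, which is where the paper's spatial localization step comes in.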

Cited by 5 publications (4 citation statements)
References 7 publications
“…These accuracy results are very encouraging, especially since we are using very sparse descriptors for the pose (just two per frame). Although higher accuracy results have been reported by Ikizler & Duygulu (2007), the accuracy of the proposed method is comparable to the well-known work of Niebles et al. (2006) and also to the recently reported results in Resendiz & Ahuja (2008) for the same dataset.…”
Section: Experiments With Weizmann Actions Dataset (supporting)
confidence: 85%
“…These surfaces are computed on an extracted foreground of a person performing an action. In contrast to ; Niebles et al (2006); Resendiz & Ahuja (2008), we compute only a few of these surfaces per frame, in fact just two features per frame. Experimental validation on Weizman datasets confirms the stability and utility of our approach.…”
Section: Results (mentioning)
confidence: 99%
“…These surfaces are computed on an extracted foreground of a person performing an action. In contrast to [12,10,11], we compute only a few of these surfaces per frame, in fact just two features per frame. Experimental validation on Weizman datasets confirms the stability and utility of our approach.…”
Section: Discussion (mentioning)
confidence: 99%
“…5. Cross-validation results for action recognition of the Weizmann dataset when the foreground patch is divided into an upper and a lower part for computing the self-similarity surface. …reported results in [11] posted for the same dataset.…”
Section: Experiments With Weizmann Action Dataset (mentioning)
confidence: 99%