2013
DOI: 10.1007/978-3-642-40246-3_70
Human Action Recognition Using Temporal Segmentation and Accordion Representation

Cited by 14 publications (9 citation statements)
References 15 publications
“…In this work, our motion descriptor achieves 57.5%. It outperforms the approaches proposed in [25,26,11,14] and gives similar results with the MBH+STP descriptor [6].…”
Section: Hollywood2 Results
confidence: 58%
“…The MBH+STP descriptor [6] achieves 74.9%. In our previous work [14] we obtained a mAP equal to 72.5%. As shown in table 2 our descriptor (mAP=75.6%) outperforms MBH+STP descriptor [6] as well as all methods proposed in [4,24,25,14] and even in some cases by a significant margin.…”
Section: Hollywood2 Results
confidence: 72%
“…Having no previous knowledge about the location of the person in each video frame, the human action in a video stream can be recovered from a great number of local descriptors extracted from the video frames (Sekma et al, 2013), (Dammak et al, 2012), (Sekma et al, 2014). Local descriptors, coupled with the bag-of-words (BOW) encoding method (Sivic and Zisserman, 2003), (Mejdoub et al, 2008), (Mejdoub et al, 2007), have recently become a very popular video representation (Ben Aoun et al, 2014), (Knopp et al, 2010), (Laptev et al, 2008), (Wang et al, 2009), (Alexander et al, 2008), (Wang et al, 2011), (Raptis and Soatto, 2010), (Pyry et al, 2010), (Jiang et al, 2012) and (Jain et al, 2013).…”
Section: Introduction
confidence: 99%