2018
DOI: 10.1007/978-3-030-00692-1_36
Silhouette-Based Action Recognition Using Simple Shape Descriptors

Cited by 7 publications (5 citation statements) · References 22 publications
“…Yang et al 8 calculated the relative positions between joints to represent the spatial information of an action, and employed the Fourier temporal pyramid to model the temporal dynamics. There are also some silhouette-based 45,46 methods for action recognition. Yiğithan Dedeoğlu et al 45 classified actions using object silhouettes.…”
Section: Related Work
confidence: 99%
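The cited approach — relative joint positions as spatial features, with Fourier analysis over time — can be sketched as follows. This is a minimal illustration only: the array shapes, the all-pairs feature choice, and the use of a single Fourier level (rather than the full pyramid) are assumptions, not the cited authors' exact formulation.

```python
import numpy as np

def relative_joint_features(joints):
    """Spatial features: every joint's position relative to every other joint.

    joints: (T, J, D) array of J joint coordinates per frame.
    Returns a (T, J*J*D) array of pairwise coordinate differences.
    """
    diffs = joints[:, :, None, :] - joints[:, None, :, :]  # (T, J, J, D)
    return diffs.reshape(joints.shape[0], -1)

def fourier_temporal_descriptor(features, n_coeffs=4):
    """Temporal modelling: keep the low-frequency Fourier magnitudes of each
    feature's trajectory (one leaf of a Fourier temporal pyramid)."""
    spectrum = np.abs(np.fft.rfft(features, axis=0))  # (T//2 + 1, F)
    return spectrum[:n_coeffs].ravel()

# Example with random stand-in skeleton data: 30 frames, 15 joints in 2-D.
T, J = 30, 15
joints = np.random.rand(T, J, 2)
desc = fourier_temporal_descriptor(relative_joint_features(joints))
```

A full Fourier temporal pyramid would recursively split the sequence into halves and concatenate the descriptors of each segment, making the representation robust to temporal misalignment.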
“…They employed an adaptive background subtraction model for target segmentation and a template matching‐based supervised learning method for object classification. Katarzyna Gościewska et al 46 used a single scalar shape measure to represent each silhouette in an action sequence, and then combined the scalars into a vector representing the entire sequence. However, hand‐crafted features cannot fully characterize spatio‐temporal information, and they adapt poorly to complex scene changes.…”
Section: Related Work
confidence: 99%
“…In addition, [16] proposed a long-term motion descriptor called sequential Deep Trajectory Descriptor (sDTD) which feeds a CNN-RNN network with dense trajectories to learn an effective representation for long-term motion. On the other hand, silhouettes were exploited by [17], [18] after extracting them using background subtraction.  Automatic feature extraction from RGB images through deep learning was also suggested in many works.…”
Section: Vision-based Human Activity Recognition
confidence: 99%
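Silhouette extraction via background subtraction, as referenced above, can be sketched with a running-average background model. This is a minimal assumption-laden sketch (grayscale frames, fixed threshold, simple exponential update), not the cited works' actual segmentation pipeline.

```python
import numpy as np

def silhouettes(frames, alpha=0.05, thresh=30):
    """Binary silhouette masks via adaptive background subtraction.

    frames: (T, H, W) grayscale video as a NumPy array.
    alpha:  background adaptation rate; thresh: foreground threshold.
    """
    bg = frames[0].astype(float)      # initialise background from frame 0
    masks = []
    for f in frames[1:]:
        masks.append(np.abs(f.astype(float) - bg) > thresh)
        bg = (1 - alpha) * bg + alpha * f  # slowly absorb scene changes
    return masks

# Example: an empty scene, then a bright moving square.
frames = np.zeros((3, 8, 8))
frames[1, 2:4, 2:4] = 255
frames[2, 4:6, 4:6] = 255
masks = silhouettes(frames)
```

Morphological cleanup (opening/closing) is usually applied to the raw masks before computing shape descriptors from them.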
“…Quan et al [35] leveraged multiple-view information and paired it with silhouettes to learn binary features that represent the shape. Silhouette shape and optical point descriptors were also utilized to recognize human actions [17] [14]. Similarly, a sequence of images with silhouettes was used to conduct action recognition [5].…”
Section: Related Work
confidence: 99%