2015
DOI: 10.1016/j.image.2015.02.004

Informative joints based human action recognition using skeleton contexts

Cited by 67 publications (51 citation statements)
References 22 publications
“…Several previous works [13], [50] have shown that in each action sequence there is often a subset of informative joints that contribute much more to action analysis, while the remaining joints may be irrelevant (or even noisy) for that action. As a result, to obtain high action-recognition accuracy we need to identify the informative skeletal joints and concentrate on their features while ignoring the features of the irrelevant ones; that is, selectively focusing attention on the informative joints is useful for human action recognition.…”
Section: B. Global Context-Aware Attention LSTM (mentioning, confidence: 99%)
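The joint-selection idea in the excerpt above can be illustrated with a minimal sketch. It uses per-joint motion variance as a crude proxy for "informativeness"; this heuristic and the `(T, J, 3)` sequence layout are assumptions for illustration only, as the cited works learn joint importance via attention rather than a fixed rule:

```python
import numpy as np

def informative_joints(seq, k=5):
    """Rank joints by motion energy and keep the top-k.

    seq: array of shape (T, J, 3) -- T frames, J joints, 3D coordinates.
    Returns indices of the k joints whose positions vary most over time,
    a simple stand-in for the informative-joint subset discussed above.
    """
    # Per-joint variance of position across frames, summed over x, y, z.
    energy = seq.var(axis=0).sum(axis=1)   # shape (J,)
    # Indices sorted by descending motion energy; keep the top k.
    return np.argsort(energy)[::-1][:k]
```

A joint that stays still across a sequence scores near zero and is dropped, matching the intuition that static joints are irrelevant (or noisy) for the action being performed.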
“…There are only a few attempts in previous works to consider the effect of view variations. A common treatment employs a pre-processing step that transforms the 3D joint coordinates from the camera coordinate system to a person-centric coordinate system: the body center is placed at the origin and the skeleton is rotated so that the body plane is parallel to the (x, y)-plane, making the skeleton data invariant to absolute location and body orientation [45,39,5,51,18,31,24,35]. Such pre-processing gains only partial view-invariance.…”
Section: Introduction (mentioning, confidence: 99%)
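The person-centric pre-processing described in this excerpt can be sketched as a translation plus a rotation. The joint indices (`center_idx`, `lhip_idx`, `rhip_idx`) are hypothetical placeholders, since each dataset defines its own joint ordering, and the z-axis rotation aligning the hip-to-hip vector with the x-axis is one common way to remove absolute body orientation, not necessarily the exact convention of the cited works:

```python
import numpy as np

def normalize_skeleton(joints, center_idx=0, lhip_idx=12, rhip_idx=16):
    """Person-centric normalization of one frame of skeleton data.

    joints: (J, 3) array of 3D joint positions in camera coordinates.
    Returns the skeleton translated so the body center sits at the
    origin and rotated so the hip-to-hip vector lies along the x-axis.
    """
    # 1. Translation: make the data invariant to absolute location.
    out = joints - joints[center_idx]
    # 2. Rotation about the z-axis: remove the absolute body orientation
    #    by aligning the left-hip -> right-hip vector with the x-axis.
    v = out[rhip_idx] - out[lhip_idx]
    theta = np.arctan2(v[1], v[0])
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return out @ rot.T
```

Because only translation and a single rotation are removed, the result is invariant to where the subject stands and which way they face, but not to camera elevation or tilt, which is why the excerpt calls the invariance partial.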
“…Even though depth cameras generally provide better-quality 3D action data than estimates from monocular video sensors, adopting the 3D joint positions alone is not sufficient to classify actions that include interaction with objects. During a human-object interaction scene, the hands may hold objects and are therefore hardly detected or recognized, due to heavy occlusions and appearance variations [13].…”
(mentioning, confidence: 99%)
“…Recently, with the development of commodity depth sensors such as the Kinect, there has been a lot of interest in human action recognition from depth data, e.g. [9], [8], [10], [11], [12], [34], [35], [36]. Instead of covering all ideas exhaustively, we direct interested readers to some recent surveys [7], [37], [38].…”
(mentioning, confidence: 99%)