Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT) 2014
DOI: 10.1109/icccnt.2014.6963015
Ridge body parts features for human pose estimation and recognition from RGB-D video data

Cited by 93 publications (39 citation statements)
References 22 publications
“…By proposing the body-surface context features, human action recognition becomes robust to translations and rotations. As with Jalal's work in [10], Song's work [12] still depends on static scenes with an embedded sensing infrastructure. Current activity-recognition approaches in such lifelogging settings usually assume there is only one actor in the scene, and it is difficult to scale these solutions up to more realistic and challenging settings such as the outdoors.…”
Section: In-situ Visual Lifelogging
confidence: 99%
“…This means that human activities can be captured through sensors such as video cameras installed in the local infrastructure; the recording is therefore highly dependent on instrumented environments such as PlaceLab (MIT) [8]. Typical uses of video sensors for in-situ sensing also include the works reported in [9][10][11][12][13]. Jalal et al. [11] proposed a depth-video-based activity recognition system for smart spaces based on feature transformation and HMM recognition.…”
Section: In-situ Visual Lifelogging
confidence: 99%
“…where P(O | h_l) denotes the likelihood of the observation sequence under the l-th activity HMM among the different activities [26][27][28]. Fig.…”
Section: Modified Hidden Markov Model (M-HMM)
confidence: 99%
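The statement above describes recognition by picking the activity whose HMM maximizes the likelihood P(O | h_l). A minimal sketch of that maximum-likelihood selection, using a scaled forward algorithm over discrete-emission HMMs (all model parameters and activity names here are illustrative, not taken from the paper):

```python
import numpy as np

def log_likelihood(obs, start_p, trans_p, emit_p):
    """Scaled forward algorithm: log P(O | lambda) for a discrete-emission HMM."""
    alpha = start_p * emit_p[:, obs[0]]
    scale = alpha.sum()
    ll = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]   # predict, then weight by emission
        scale = alpha.sum()
        ll += np.log(scale)                        # accumulate log-likelihood
        alpha /= scale                             # rescale to avoid underflow
    return ll

# Two hypothetical 2-state activity HMMs over 3 quantized feature symbols
hmm_walk = (np.array([0.6, 0.4]),
            np.array([[0.7, 0.3], [0.4, 0.6]]),
            np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]))
hmm_sit  = (np.array([0.5, 0.5]),
            np.array([[0.9, 0.1], [0.2, 0.8]]),
            np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]]))

obs = [0, 1, 1, 2]  # a quantized feature sequence from the depth video
scores = {name: log_likelihood(obs, *params)
          for name, params in {"walk": hmm_walk, "sit": hmm_sit}.items()}
best = max(scores, key=scores.get)  # recognized activity = argmax_l P(O | h_l)
```

In practice each activity's HMM is trained on its own feature sequences; recognition then reduces to the argmax over per-model likelihoods shown here.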
“…It determines the most frequently occurring color in the image, or the maximum gray level in a grayscale image. Depth-estimation methods based on image-content understanding mainly classify each scene block in the image and then, for each category of scenery, apply the method applicable to that category to estimate its depth information [23][24][25][26][27].…”
Section: The Intensity and Depth Pixels Problem
confidence: 99%
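The classify-then-estimate scheme in the statement above can be sketched as tiling the image into blocks, labeling each block, and dispatching a per-category depth estimator. Everything below (the toy classifier thresholds, category names, and depth rules) is an illustrative assumption, not the cited papers' method:

```python
import numpy as np

# Hypothetical per-category depth estimators: "sky" blocks are far,
# "ground" blocks get a near-to-far vertical gradient, others a fixed depth.
def depth_sky(block):
    return np.full(block.shape, 100.0)

def depth_ground(block):
    h, w = block.shape
    return np.tile(np.linspace(5.0, 1.0, h)[:, None], (1, w))

def depth_object(block):
    return np.full(block.shape, 3.0)

ESTIMATORS = {"sky": depth_sky, "ground": depth_ground, "object": depth_object}

def classify_block(block):
    """Toy intensity-based classifier: bright -> sky, dark -> ground, else object."""
    m = block.mean()
    if m > 180:
        return "sky"
    if m < 60:
        return "ground"
    return "object"

def estimate_depth(gray, block=8):
    """Tile the image into blocks, classify each block, and apply the
    depth estimator for that block's scenery category."""
    h, w = gray.shape
    depth = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = gray[y:y + block, x:x + block]
            depth[y:y + block, x:x + block] = ESTIMATORS[classify_block(b)](b)
    return depth

img = np.zeros((16, 16))
img[:8] = 200                 # top half bright ("sky"), bottom half dark ("ground")
d = estimate_depth(img)
```

Real systems replace the toy classifier with a learned scene-block classifier and the depth rules with category-specific estimation models.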