2018
DOI: 10.1155/2018/6271348
Human Pose Recognition Based on Depth Image Multifeature Fusion

Abstract: The recognition of human pose based on machine vision usually suffers from a low recognition rate, low robustness, and low operating efficiency. This is mainly caused by the complexity of the background, as well as the diversity of human poses, occlusion, and self-occlusion. To solve this problem, a feature extraction method combining the directional gradient of depth feature (DGoD) and the local difference of depth feature (LDoD) is proposed in this paper, which uses a novel strategy that incorporates eight neighborhood …
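As a rough illustration of the two descriptors named in the abstract, the sketch below computes, for one pixel of a depth map, an eight-neighborhood local depth difference vector (LDoD-style) and a depth-gradient orientation histogram (DGoD-style). The neighborhood radius, window size, and bin count are assumptions made for illustration, not the paper's exact definitions.

```python
import numpy as np

# Eight-neighborhood offsets around a centre pixel (illustrative choice).
EIGHT_NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1),
                   ( 0, -1),          ( 0, 1),
                   ( 1, -1), ( 1, 0), ( 1, 1)]

def ldod_feature(depth, y, x, radius=2):
    """Local difference of depth: signed differences between the centre
    pixel and its eight neighbours at a fixed radius (radius is assumed)."""
    c = depth[y, x]
    return np.array([depth[y + radius * dy, x + radius * dx] - c
                     for dy, dx in EIGHT_NEIGHBORS], dtype=np.float32)

def dgod_feature(depth, y, x, half=4, bins=8):
    """Directional gradient of depth: magnitude-weighted orientation
    histogram of the depth gradient in a small window around the pixel."""
    patch = depth[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-6)

# Example: concatenate both descriptors into one fused feature vector.
depth_map = np.random.rand(120, 160).astype(np.float32)  # stand-in depth image
feature = np.concatenate([ldod_feature(depth_map, 60, 80),
                          dgod_feature(depth_map, 60, 80)])
```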

Cited by 4 publications (1 citation statement)
References 45 publications
“…Some recent work has explored the use of multimodal fusion for detection and segmentation tasks in autonomous driving [17,20,21]. Andreas Eitel introduced a multistage training method that effectively encodes depth information for a CNN [22], so that learning does not require large depth datasets; its data-augmentation scheme for robust learning corrupts the depth images with realistic noise patterns [23]. Hyunggi Cho et al. [20] redesigned the sensor configuration and installed multiple LiDAR pairs and vision sensors.…”
Section: Lane Line Segmentation
confidence: 99%
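The depth-image augmentation mentioned in the citing statement could look roughly like the sketch below: Gaussian noise plus random pixel dropout to mimic missing depth readings. The function name, noise model, and parameter values are assumptions for illustration and are not taken from the cited works.

```python
import numpy as np

def corrupt_depth(depth, dropout_prob=0.05, noise_std=0.01, rng=None):
    """Toy depth augmentation: add Gaussian noise and randomly zero out
    pixels to simulate invalid or missing sensor measurements."""
    rng = rng or np.random.default_rng()
    noisy = depth + rng.normal(0.0, noise_std, size=depth.shape)
    mask = rng.random(depth.shape) < dropout_prob
    noisy[mask] = 0.0  # zero commonly encodes "no reading" in depth maps
    return noisy.astype(depth.dtype)

augmented = corrupt_depth(np.random.rand(120, 160).astype(np.float32))
```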