2020 17th International Bhurban Conference on Applied Sciences and Technology (IBCAST)
DOI: 10.1109/ibcast47879.2020.9044545
RGB-D Images for Object Segmentation, Localization and Recognition in Indoor Scenes using Feature Descriptor and Hough Voting

Cited by 48 publications (20 citation statements)
References 36 publications
“…For robust identification of HIR, actual human interaction areas need to be extracted and to distinguish target images from clutter [ 64 ]. To extract efficient silhouette representation, we depend mainly on connected components, skin tone, region growing and color spacing [ 65 ]. Various algorithms are used for both RGB and RGB-D silhouette segmentation to improve the performance of the proposed system.…”
Section: Proposed System Methodology
Mentioning confidence: 99%
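The statement above lists connected components among the silhouette-segmentation steps. As a minimal sketch only (not the cited authors' implementation, and assuming a binary foreground mask as input), extracting the largest 4-connected component looks like this:

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Return a boolean mask of the largest 4-connected foreground blob.

    Illustrative sketch of a connected-components silhouette step;
    `mask` is assumed to be a 2-D binary (0/1) array.
    """
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    sizes = {}
    label = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                # Start a new component and flood-fill it with BFS.
                label += 1
                labels[sy, sx] = label
                q = deque([(sy, sx)])
                count = 0
                while q:
                    y, x = q.popleft()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = label
                            q.append((ny, nx))
                sizes[label] = count
    if not sizes:
        return np.zeros_like(mask, dtype=bool)
    best = max(sizes, key=sizes.get)
    return labels == best

# Two blobs: a 3-pixel one (top-left) and a 5-pixel one (bottom-right).
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1],
                 [0, 1, 1, 1]])
print(largest_component(mask).sum())  # 5
```

In practice a library routine (e.g. OpenCV's `cv2.connectedComponents`) would replace this hand-rolled BFS; the sketch only shows the idea of keeping the dominant blob as the silhouette candidate.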
“…After that, in order to remove noise from the image, a median filter is applied in which pixels are replaced by a median of neighboring pixels [66,67]. The most important step in any HAR system is to define and mine Regions of Interest (ROI) [68]. In our work, an ROI consists of two persons involved in an interaction in RGB-D images.…”
Section: Foreground Extraction
Mentioning confidence: 99%
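The median filter described above ("pixels are replaced by a median of neighboring pixels") can be sketched as follows. This is an illustrative NumPy version under the assumption of a 2-D grayscale image with edge padding, not the cited authors' code:

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood.

    Simple denoising sketch (assumes `img` is a 2-D grayscale array);
    edge pixels are handled by replicate padding.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-noise pixel in a flat region is suppressed:
noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[2, 2] = 255
print(median_filter(noisy)[2, 2])  # 0
```

A production pipeline would typically call `scipy.ndimage.median_filter` or `cv2.medianBlur` instead of the explicit loop shown here.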
“…To date, most research on this field has focused on single modality approaches, which may consist of either RGB [ 12 ] or RGB-D videos [ 13 ], wearables such as inertial sensors (Inertial Measurement Units—IMUs) [ 14 ], or ambient sensors [ 15 ]. The scenarios in which each of these modalities have been employed for activity recognition vary according to the availability of data, which may be constrained by technical or ethical limitations.…”
Section: Introduction
Mentioning confidence: 99%