2022
DOI: 10.1109/access.2022.3218679

3D Gesture Recognition and Adaptation for Human–Robot Interaction

Abstract: Gesture-based human-robot interaction has been an important area of research in recent years. A primary goal for researchers has been to create a gesture detection system that is insensitive to lighting and background conditions. This research proposes a 3D gesture recognition and adaptation system based on Kinect for human-robot interaction. The framework consists of four modules, i.e., pointing gesture recognition, 3D dynamic gesture recognition, gesture adaptation, and robot navigation. The pro…
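The four modules named in the abstract can be read as a single processing pipeline. The Python sketch below is only an illustration of how such a loop might be wired together, assuming Kinect skeleton frames as input; every class, method, and parameter name is a hypothetical placeholder, not the authors' implementation.

    class GestureHRIPipeline:
        """Chains the four modules: pointing-gesture recognition, 3D dynamic-gesture
        recognition, gesture adaptation, and robot navigation (names are placeholders)."""

        def __init__(self, pointing_model, dynamic_model, adapter, navigator):
            self.pointing_model = pointing_model  # skeleton frame -> pointed-at floor target
            self.dynamic_model = dynamic_model    # joint trajectory -> command label
            self.adapter = adapter                # per-user refinement of recognised commands
            self.navigator = navigator            # target + command -> robot motion plan

        def step(self, skeleton_frames):
            """skeleton_frames: (T, J, 3) sequence of Kinect joint positions over T frames."""
            target = self.pointing_model(skeleton_frames[-1])         # where the user points
            command = self.dynamic_model(skeleton_frames)             # e.g. "go", "stop", "follow"
            command = self.adapter.refine(command, skeleton_frames)   # user-specific adjustment
            return self.navigator.plan(target, command)

Splitting the pointing target from the command label mirrors the abstract's separation of pointing-gesture recognition and dynamic-gesture recognition, with adaptation sitting between recognition and navigation.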

Cited by 9 publications (4 citation statements)
References 32 publications (38 reference statements)
“…Stergios Poularakis et al. [23] proposed a complete gesture recognition system based on maximum cosine similarity and a fast nearest neighbor technique, which captures the user's gesture with a camera and translates it into the corresponding command with simplicity, accuracy, and low complexity. Jubayer Al Mahmud et al. [24] proposed a 3D gesture recognition and adaptation system based on Kinect for human-robot interaction. Harshala Gammulle et al. [25] proposed a single-stage continuous gesture recognition framework, called temporal multimodal fusion (TMMF), which enables the detection and classification of multiple gestures in videos through a single model.…”
Section: Related Work
confidence: 99%
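As a rough illustration of the maximum-cosine-similarity nearest-neighbour idea attributed to [23] above, the following Python sketch classifies a gesture feature vector against stored templates; the feature layout, template set, and labels are illustrative assumptions, not the cited paper's exact method.

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two flattened gesture feature vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def classify_gesture(query, templates, labels):
        """Return the label of the stored template with maximum cosine similarity.

        query     : (D,) feature vector of the observed gesture
        templates : (N, D) array of stored gesture templates
        labels    : length-N list of command labels
        """
        sims = [cosine_similarity(query, t) for t in templates]
        return labels[int(np.argmax(sims))]

    # Toy usage: three stored templates and a noisy copy of the second one.
    templates = np.random.rand(3, 64)
    labels = ["go", "stop", "turn_left"]
    query = templates[1] + 0.05 * np.random.rand(64)
    print(classify_gesture(query, templates, labels))  # expected output: "stop"

A real system would replace the brute-force loop with a fast nearest-neighbour index, which is the low-complexity aspect the citation highlights.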
“…According to Jubayer Al Mahmud [9], the primary objective of this study is to develop a method for controlling mobile robots using pointing and other dynamic command gestures, regardless of the illumination or terrain. Since Kinect is an effective tool for tracking people's bodies and obtaining joint positions in three-dimensional space, it has been selected as the detector to collect gesture instances.…”
Section: Related Work
confidence: 99%
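One common way to turn Kinect joint coordinates into a pointing target, consistent with (but not taken from) the approach summarised above, is to intersect the arm's ray with the ground plane. The sketch below assumes a flat floor and a shoulder-to-hand ray; both the joint choice and the flat-floor assumption are illustrative.

    import numpy as np

    def pointing_target(shoulder, hand, floor_z=0.0):
        """Intersect the shoulder-to-hand ray with the horizontal floor plane z = floor_z.

        shoulder, hand : (3,) arrays of 3D joint coordinates from the Kinect skeleton.
        Returns the (x, y) point on the floor, or None if the arm does not point downward.
        """
        direction = hand - shoulder
        if direction[2] >= 0:  # ray never reaches the floor
            return None
        t = (floor_z - shoulder[2]) / direction[2]
        return (shoulder + t * direction)[:2]

    # Example: shoulder at 1.4 m height, hand at 1.1 m, arm pointing forward and down.
    shoulder = np.array([0.0, 0.0, 1.4])
    hand = np.array([0.2, 0.4, 1.1])
    print(pointing_target(shoulder, hand))  # floor point roughly 2 m in front of the user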
“…Dynamic gesture recognition uses HMM, multiclass SVM, and CNN. Adaptation uses user-consent-based or semi-supervised methods [5]. Javier Laplaza et al. developed a gesture-based language using neural networks to detect body motions so that humans can interact with robots naturally [6].…”
Section: Introduction
confidence: 99%
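To make the classifier families mentioned above concrete, here is a minimal multiclass-SVM sketch over flattened joint-trajectory features, using scikit-learn on synthetic data; the feature layout, class labels, and hyperparameters are illustrative assumptions rather than the cited system's configuration.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy data: 60 gestures, each a 30-frame trajectory of 20 joints in 3D,
    # flattened into one feature vector per sample.
    n_samples, n_frames, n_joints = 60, 30, 20
    X = rng.normal(size=(n_samples, n_frames * n_joints * 3))
    y = rng.integers(0, 4, size=n_samples)  # four command classes, e.g. go/stop/left/right

    clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multiclass SVM
    clf.fit(X, y)
    print(clf.predict(X[:5]))  # predicted command labels for the first five gestures

An HMM would instead model each gesture class as a sequence of hidden states over the frames, and a CNN would learn features directly from the trajectory, but the SVM variant is the simplest to show end to end.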