2011 IEEE/SICE International Symposium on System Integration (SII)
DOI: 10.1109/sii.2011.6147623
Tracking with pointing gesture recognition for human-robot interaction

Abstract: In the field of human-robot interaction, a key problem is for the robot to obtain information about its interactive partner. Without this information, the robot cannot understand what the partner means. Most current interactive robots use vision sensors to detect the human face and then react to the interactive partner. However, this works only under strict conditions: once the partner moves out of sight, the robot does not know where to find him. So it is import…

Cited by 7 publications (4 citation statements)
References 4 publications
“…There is extensive research on hand sign, hand gesture, and shape recognition using thinning methods, contour and curvature analysis, and convex hulls applied to the outstretched hand [19]. In this paper, we choose the distance transform method plus polygonal approximation to obtain the hand's skeleton and then recognize the hand gesture.…”
Section: A Hand Skeleton Recognizer (mentioning)
confidence: 99%
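The skeleton-by-distance-transform idea quoted above can be illustrated with a minimal sketch. This is not the citing paper's exact pipeline: it uses a multi-source BFS distance transform and takes ridge cells (local maxima of the distance map) as skeleton points on a toy binary hand mask; a real system would run this on a segmented camera frame and follow it with a polygonal approximation step (e.g. Douglas-Peucker) to simplify the skeleton.

```python
from collections import deque

def distance_transform(mask):
    """Multi-source BFS: distance from each foreground cell (1)
    to the nearest background cell (0), 4-connectivity."""
    h, w = len(mask), len(mask[0])
    dist = [[0] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                q.append((y, x))       # background cells are BFS sources
            else:
                dist[y][x] = -1        # unvisited foreground
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] == -1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def skeleton_points(dist):
    """Ridge cells: foreground cells whose distance value is a
    local maximum over their 8-neighborhood."""
    h, w = len(dist), len(dist[0])
    pts = []
    for y in range(h):
        for x in range(w):
            d = dist[y][x]
            if d > 0 and all(
                dist[ny][nx] <= d
                for ny in (y - 1, y, y + 1)
                for nx in (x - 1, x, x + 1)
                if 0 <= ny < h and 0 <= nx < w
            ):
                pts.append((y, x))
    return pts

# Toy 5x7 binary mask standing in for a segmented hand region.
mask = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
d = distance_transform(mask)
ridge = skeleton_points(d)   # the horizontal centerline of the blob
```

On this mask the ridge is the middle row of the rectangle, which is the medial-axis behavior the distance transform is chosen for.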
“…When a human interacts with an object by a pointing gesture, the pointing direction is estimated from two points on the camera perspective line. Depending on the input camera, pointing gesture recognition technologies can be classified into 3D methods [11][12][13] and 2D methods [14][15][16]. 3D methods rely on specific input devices, such as a stereo camera or Kinect, while 2D methods use less expensive and more readily available cameras, with reduced computation cost for 2D information.…”
Section: Introduction (mentioning)
confidence: 99%
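The "two points on the camera perspective line" idea can be sketched in a few lines: given two tracked image keypoints (which keypoints are used varies by method; eye-to-fingertip and elbow-to-fingertip are both common choices, so the names below are illustrative), the 2D pointing direction is simply the normalized vector between them.

```python
import math

def pointing_direction(p_base, p_tip):
    """Unit direction of the pointing ray in image coordinates,
    from a base keypoint (e.g. eye or elbow) through the fingertip."""
    dx = p_tip[0] - p_base[0]
    dy = p_tip[1] - p_base[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        raise ValueError("base and tip keypoints coincide")
    return (dx / norm, dy / norm)

# Illustrative pixel coordinates: elbow at (100, 200), fingertip at (160, 120).
d = pointing_direction((100, 200), (160, 120))
angle = math.degrees(math.atan2(d[1], d[0]))  # direction as an image-plane angle
```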
“…The second challenge is to estimate the pointing position, determined by the intersection between the interaction plane and the pointing vector, given the lack of depth information. To address this, some interaction systems based on 2D pointing gesture methods use only the pointing direction instead of the pointing position [14,15], while other systems require users to perform coordinate calibration before operating [16]. In this paper, we propose an edge repair-based hand subpart segmentation algorithm, which accurately and effectively segments the palm and finger regions from the background using 2D information.…”
Section: Introduction (mentioning)
confidence: 99%
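The "intersection between the interaction plane and the pointing vector" is a standard ray-plane intersection; the sketch below assumes 3D keypoints are already available (e.g. from a depth camera), which is exactly the information the quoted 2D methods lack.

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Point where the pointing ray hits the interaction plane,
    or None if the ray is parallel to or pointing away from it."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None                     # ray parallel to the plane
    t = sum((p - o) * n
            for o, p, n in zip(origin, plane_point, plane_normal)) / denom
    if t < 0:
        return None                     # plane is behind the pointer
    return tuple(o + t * d for o, d in zip(origin, direction))

# Fingertip at height z = 1 pointing straight down at a tabletop plane z = 0.
hit = ray_plane_intersection((0.2, 0.3, 1.0), (0.0, 0.0, -1.0),
                             (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

In practice the ray's origin and direction would come from two 3D body keypoints, and the plane parameters from a one-time calibration of the screen or tabletop.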
“…In past years, skin color detection has been regarded as an essential component of human-robot interaction systems for analyzing human intentions (Kim et al 2006; Luo et al 2011). This methodology has become popular because it requires less computing effort than other image processing approaches, even though skin color analysis is a fundamental area of pattern recognition.…”
Section: Introduction (mentioning)
confidence: 99%
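The low computing effort mentioned above comes from the fact that classic skin detection is just a per-pixel threshold in a suitable color space. A minimal sketch, not tied to any of the cited systems: convert RGB to YCbCr (ITU-R BT.601) and apply a commonly used fixed CbCr range (Chai and Ngan style thresholds); real systems often refine this with adaptive or learned models.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Fixed-threshold skin rule in CbCr space: luminance is ignored,
    which gives the rule some robustness to lighting changes."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

Applied per pixel this yields a binary skin mask, which is then the input to the face or hand localization stages the quoted systems build on.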