2020
DOI: 10.1007/978-3-030-50726-8_27

Recognition and Localisation of Pointing Gestures Using a RGB-D Camera

Abstract: Non-verbal communication is part of our regular conversation, and multiple gestures are used to exchange information. Among those gestures, pointing is the most important one. If such gestures cannot be perceived by other team members, e.g. by blind and visually impaired people (BVIP), they lack important information and can hardly participate in a lively workflow. Thus, this paper describes a system for detecting such pointing gestures to provide input for suitable output modalities to BVIP. Our system employ…


Cited by 16 publications (8 citation statements)
References 11 publications
“…The pointing gesture recognition system [2] uses a Kinect v2 sensor. The sensor data is passed to ROS 1 (Robot Operating System) and analyzed by OpenPTrack [1] to obtain the joint coordinates of the pointing arm.…”
Section: Pointing Gesture Recognition System
confidence: 99%
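
The snippet above describes a pipeline of Kinect v2 → ROS 1 → OpenPTrack that ends in 3D joint coordinates of the pointing arm. As a minimal sketch of what a downstream consumer of those joints might do, the following Python example intersects the elbow→wrist ray with a horizontal table plane to localise the pointing target. The elbow–wrist ray definition, the plane height, and the shared coordinate frame are assumptions made for illustration, not details taken from the paper.

# Illustrative sketch only (not the authors' code): given 3D elbow and
# wrist joints, e.g. as delivered by OpenPTrack over ROS, estimate the
# pointed-at location as the intersection of the forearm ray with a
# horizontal plane (an assumed table top at z = 0.75 m).
import numpy as np

def pointing_target(elbow, wrist, plane_z=0.75):
    """Intersect the elbow->wrist ray with the plane z = plane_z.
    Coordinates are in metres, in one common camera/world frame
    (an assumption for this sketch). Returns the 3D target point,
    or None if the arm does not point towards the plane."""
    elbow = np.asarray(elbow, dtype=float)
    wrist = np.asarray(wrist, dtype=float)
    direction = wrist - elbow               # forearm direction
    if abs(direction[2]) < 1e-9:            # ray parallel to the plane
        return None
    t = (plane_z - wrist[2]) / direction[2]
    if t < 0:                               # plane lies behind the hand
        return None
    return wrist + t * direction

# Example: an arm pointing forward and down towards the table.
print(pointing_target(elbow=[0.0, 0.0, 1.20], wrist=[0.2, 0.1, 1.05]))
# -> [0.6  0.3  0.75]

Other ray definitions (e.g. head–hand or shoulder–wrist) are also common in the pointing-gesture literature; the elbow–wrist choice here is arbitrary.
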
“…One part of the MAPVI project and this software suite is the inclusion of gesture recognition. So far, pointing gestures of participants can be recognized [9]. The brainstorming tool allows these pointing gestures to be fed into the system and shared with BVIP.…”
Section: Recognition of NVC
confidence: 99%
“…Eye gaze provides information on a user's emotional state (Bal et al., 2010), text entry (Majaranta and Räihä, 2007), or concentration on an object (Symons et al., 2004); it can be used to infer visualization tasks and a user's cognitive abilities (Steichen et al., 2013), to enhance interaction (Hennessey et al., 2014), and to enable communication via eye gaze patterns (Qvarfordt and Zhai, 2005). However, such information cannot be accessed by blind and visually impaired people (BVIP), as they cannot see where the other person in the meeting room is looking (Dhingra and Kunz, 2019; Dhingra et al., 2020). Therefore, it is important to track eye gaze in the meeting environment and provide the relevant information to them.…”
Section: Introduction
confidence: 99%