Published: 2015
DOI: 10.1109/thms.2015.2461683
Comparison of Interaction Modalities for Mobile Indoor Robot Guidance: Direct Physical Interaction, Person Following, and Pointing Control

Abstract: Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction, requiring the human user to push the robot in order to displace it. The second and third interaction modalities exploit 3D vision-based human-skeleton tracking, allowing the user to guide the robot by either walking in front of it …
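As a concrete illustration of the person-following modality, the sketch below shows a naive proportional follower. It is not the paper's implementation: the standoff distance, gains, and velocity limits are assumed values, and the user position is taken to come from the skeleton tracker expressed in the robot frame.

```python
import math

# Hypothetical tuning values; the paper does not specify its controller.
STANDOFF = 1.2           # desired robot-to-user distance [m]
K_LIN, K_ANG = 0.6, 1.5  # proportional gains

def follow_step(user_x, user_y):
    """One control step of a naive person follower.

    (user_x, user_y): user's position in the robot frame [m],
    e.g. from a skeleton tracker. Returns (v, w): forward and
    angular velocity commands [m/s, rad/s].
    """
    distance = math.hypot(user_x, user_y)
    bearing = math.atan2(user_y, user_x)  # 0 when the user is dead ahead
    v = K_LIN * (distance - STANDOFF)     # close the distance gap
    w = K_ANG * bearing                   # turn to face the user
    # Clamp to plausible mobile-base limits.
    v = max(-0.3, min(0.7, v))
    w = max(-1.0, min(1.0, w))
    return v, w
```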

Cited by 51 publications (21 citation statements)
References: 84 publications
“…Various pointing recognition methods have been proposed in the literature, tailored to the system's sensing abilities, e.g., finger tracking [26], or to the task requirements, e.g., the distance of the pointing target [27]. Our previous studies showed that pointing recognition using the positions of the elbow and wrist joints can be successfully applied to robot control in close human-robot interaction [28], [29]. The user-tracking algorithm described in Section II-C3 provides the positions of the arm joints in real- …”
Section: Algorithms
confidence: 99%
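The elbow-wrist pointing scheme quoted above can be sketched as a ray-floor intersection. This is an illustrative reconstruction under assumed conventions (joint positions in a world frame with the z-axis up), not the authors' code:

```python
import numpy as np

def pointing_target(elbow, wrist, floor_z=0.0):
    """Estimate the floor point a user points at from two arm joints.

    elbow, wrist: 3-element positions in a world frame (z up), e.g.
    from a skeleton tracker. Returns the (x, y) intersection of the
    elbow->wrist ray with the plane z = floor_z, or None if the arm
    points level with or above the horizon.
    """
    elbow, wrist = np.asarray(elbow, float), np.asarray(wrist, float)
    direction = wrist - elbow
    if direction[2] >= -1e-6:                 # ray never reaches the floor
        return None
    t = (floor_z - wrist[2]) / direction[2]   # extend the ray past the wrist
    target = wrist + t * direction
    return target[0], target[1]
```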
“…Numerous gesture recognition methods, based on statistical modelling, computer vision, pattern recognition, and so forth, have been proposed in the literature [12]. The release of affordable RGB-D cameras such as the Microsoft Kinect enabled the development of novel human body segmentation algorithms [13], which improved the performance of state-of-the-art gesture recognition methods, including for robotics applications [14]. In the current work, a hand-tracking algorithm was applied to robot manipulator teleoperation and combined with RL to develop a system that is able to learn from user intervention.…”
Section: Relevant Work
confidence: 99%
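As a rough sketch of how hand tracking can drive manipulator teleoperation (the cited system's actual mapping is not described here), one common scheme commands a Cartesian end-effector velocity proportional to the hand's offset from a reference pose; the gain and deadband below are assumed values:

```python
import numpy as np

# Hypothetical scaling and deadband; real systems tune these per task.
GAIN = 0.8       # end-effector velocity per metre of hand offset [1/s]
DEADBAND = 0.03  # ignore tracking jitter below 3 cm

def hand_to_ee_velocity(hand_pos, ref_pos):
    """Map a tracked hand offset to a Cartesian end-effector velocity.

    hand_pos, ref_pos: 3-element positions [m] in a common frame.
    Returns a 3-element velocity command [m/s].
    """
    offset = np.asarray(hand_pos, float) - np.asarray(ref_pos, float)
    if np.linalg.norm(offset) < DEADBAND:
        return np.zeros(3)  # hold still inside the deadband
    return GAIN * offset
```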
“…Various methods for gesture recognition from RGB cameras have been proposed in the literature [12]. In our previous work, we showed that the recognition of pointing gestures from depth images can be applied to real-time mobile robot guidance [13]. Here, the concept of visual robot guidance is implemented on robot manipulators, and a Reinforcement Learning (RL) algorithm is applied to produce an improved robot motion segment from a series of user interventions.…”
Section: Relevant Work
confidence: 99%
“…Participants completed the raw NASA-TLX questionnaire after each experiment. The questionnaire collects ratings on six workload dimensions, each ranging from 0 to 100, and was used to assess overall participant workload during the experiments, similarly to [13]. The overall workload is computed as the average of these six dimensions.…”
Section: Performance and Workload Measures
confidence: 99%
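The raw (unweighted) NASA-TLX computation described above reduces to a mean of the six subscale ratings. A minimal sketch, with the example ratings invented for illustration:

```python
# Raw NASA-TLX: overall workload is the unweighted mean of the six
# dimension ratings (each on a 0-100 scale).
TLX_DIMENSIONS = ("mental", "physical", "temporal",
                  "performance", "effort", "frustration")

def raw_tlx(ratings):
    """ratings: dict mapping each TLX dimension to a 0-100 score."""
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)

# Example: one participant's ratings after a trial (illustrative values).
print(raw_tlx({"mental": 40, "physical": 25, "temporal": 30,
               "performance": 20, "effort": 35, "frustration": 15}))
# -> 27.5
```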