2021
DOI: 10.1145/3472223
Methods for Expressing Robot Intent for Human–Robot Collaboration in Shared Workspaces

Abstract: Human–robot collaboration is becoming increasingly common in factories around the world; accordingly, we need to improve the interaction experiences between humans and robots working in these spaces. In this article, we report on a user study that investigated methods by which a robot can signal its intent to move to a person working with it in a shared workspace. In this case, the workspace was the surface of a tabletop. Our study tested the effectiveness …

Cited by 19 publications (18 citation statements)
References 52 publications
“…One of the first major approaches to modelling legibility was proposed by Dragan et al. [6], [7] and has sparked extensive follow-up work [10], [11], [12], [13], [14], [15], [16]. The work assumes that humans expect robots to move efficiently and that we can model this expectation using a cost function over trajectories (the observer model).…”
Section: A. Legibility
confidence: 99%
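As context for the observer model referenced in the statement above, Dragan et al.'s formulation scores a partial trajectory by how strongly it implies one candidate goal over the others: a goal is probable if completing the motion via the observed prefix is nearly as cheap as heading straight to that goal. The following is a minimal sketch, not the authors' implementation; it assumes straight-line Euclidean distance as the efficiency cost and hypothetical 2D goal positions.

```python
import math

def cost(path):
    """Efficiency cost of a path: total Euclidean length (the assumed observer model)."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def goal_probability(partial, start, goal, goals):
    """P(goal | partial trajectory) under an exponential-cost observer model.

    A goal scores highly when the cheapest completion through the observed
    prefix costs barely more than the cheapest path from start to that goal.
    """
    def score(g):
        via = cost(partial) + math.dist(partial[-1], g)  # cheapest completion via the prefix
        direct = math.dist(start, g)                     # cheapest path overall
        return math.exp(-(via - direct))
    z = sum(score(g) for g in goals)
    return score(goal) / z

# Two candidate goals; the partial trajectory veers toward the second one.
goals = [(0.0, 1.0), (1.0, 1.0)]
partial = [(0.5, 0.0), (0.9, 0.4)]
p = goal_probability(partial, partial[0], goals[1], goals)  # probability of the intended goal
```

A legible motion planner would exaggerate the trajectory so that this probability rises above 0.5 as early as possible, letting an observer infer the goal before the motion completes.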
“…The same is likely to apply to autonomous vehicles and other autonomous systems that require complex interactions with people, where people are familiar with the normal motions associated with specific contexts [41]. This has been tested by, e.g., Arntz et al. [38], who evaluated various interfaces, including a text-based interface and simple OK / not-OK lights mounted on the robot arm, and found that these basic systems mostly increased positivity towards the robot's motion but did not necessarily increase trust or efficiency.…”
Section: Collaborative Robot Applications
confidence: 99%
“…Unfortunately, research on the interaction between Baxter's facial expressions and human actors is quite scarce, although some sources do exist on the matter. Lemasurier et al. [41] investigated and compared the use of light emitters and motion cues on a Baxter robot and found that light signals in close proximity to the end effector are the ones most readily noticed by a human actor. Their variables included two instances where the robot's screen was used, one where eye gaze was the cue, and one where the actual pan of the "head" (screen) was the factor.…”
Section: Figure 1. Baxter Robot (Right) and Its One-armed Little Broth...
confidence: lo