Abstract: The ultimate long-term goal in Human-Robot Interaction (HRI) is to design robots that can act as a natural extension to humans. This requires robot control architectures that provide structure for integrating the necessary components of HRI. This paper describes how HBBA, a Hybrid Behavior-Based Architecture, can be used as a unifying framework for the integrated design of HRI scenarios. More specifically, we focus here on HBBA's generic coordination mechanism for behavior-producing modules, which makes it possible to address a wide range of cognitive capabilities, from assisted teleoperation to selective attention and episodic memory. Using IRL-1, a humanoid robot equipped with compliant actuators for motion and manipulation, proximity sensors, cameras and a microphone array, three interaction scenarios are implemented: multi-modal teleoperation with physical guidance interaction, fetching-and-delivering, and tour-guiding.
Commercial telepresence robots provide video, audio, and proximity data to remote operators through a teleoperation user interface running on standard computing devices. As new modalities such as force sensing and sound localization are being developed and tested on advanced robotic platforms, ways to integrate such information into a teleoperation interface are required. This paper demonstrates the use of visual representations of forces and sound localization in a 3D teleoperation interface. Forces are represented using colors, size, bar graphs and arrows, while speech or ring bubbles are used to represent sound positions and types. Validation of these modalities is done with 31 participants using IRL-1/TR, a humanoid platform equipped with differential elastic actuators to provide compliance and force control of its arms, and capable of sound source localization. Results suggest that visual representations of interaction force and sound source can provide useful information to remote operators.
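One way to render force magnitude as a color cue, as the force-visualization modality described above suggests, is a simple green-to-red mapping. The sketch below is illustrative only; the function name, the 20 N saturation threshold, and the HSV mapping are assumptions for this example, not details taken from the paper.

```python
import colorsys

def force_to_color(force_n, max_force_n=20.0):
    """Map a force magnitude in newtons to an RGB color for a GUI overlay.

    Low forces render green, high forces red, giving the operator an
    at-a-glance cue of interaction intensity. The saturation threshold
    max_force_n is an arbitrary choice for this sketch.
    """
    # Clamp and normalize the force to [0, 1]
    ratio = max(0.0, min(force_n / max_force_n, 1.0))
    # Sweep hue from 1/3 (green) down to 0 (red) in HSV space
    r, g, b = colorsys.hsv_to_rgb((1.0 - ratio) / 3.0, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(force_to_color(0.0))   # no contact force -> pure green (0, 255, 0)
print(force_to_color(20.0))  # saturated force  -> pure red (255, 0, 0)
```

The same normalized ratio could equally drive the size of an arrow or the fill level of a bar graph, the other force representations the interface uses.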
Representing the location of sound sources may be helpful when teleoperating a mobile robot. To evaluate this modality, we conducted trials in which the graphical user interface (GUI) displays a blue ring icon on the video stream where the sound is located. Results show that this visualization modality provides a clear benefit when a user has to distinguish between multiple sound sources.
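Placing such an icon on the video stream requires projecting the localized sound direction into image coordinates. A minimal sketch using a pinhole-camera model is shown below; the function name, the 640-pixel width, and the 60-degree horizontal field of view are assumed values for illustration, not parameters reported in the trials.

```python
import math

def azimuth_to_pixel_x(azimuth_deg, image_width=640, hfov_deg=60.0):
    """Map a sound-source azimuth (degrees, 0 = camera axis) to a
    horizontal pixel coordinate for overlaying an icon on the video.

    Uses a pinhole-camera model: x = cx - f * tan(azimuth), where the
    focal length f in pixels is derived from the horizontal field of
    view. Returns None when the source lies outside the camera's view.
    """
    if abs(azimuth_deg) >= hfov_deg / 2.0:
        return None  # outside the visible frustum; icon not drawn
    # Focal length in pixels from the horizontal field of view
    f = (image_width / 2.0) / math.tan(math.radians(hfov_deg / 2.0))
    cx = image_width / 2.0
    return int(round(cx - f * math.tan(math.radians(azimuth_deg))))

print(azimuth_to_pixel_x(0.0))   # sound straight ahead -> 320 (frame center)
print(azimuth_to_pixel_x(45.0))  # beyond the 60-degree FOV -> None
```

A full interface would also handle sources behind the robot, e.g. by clamping the icon to the frame edge or switching to an off-screen indicator.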