The ability of robots to autonomously perform tasks is increasing. As robots become more autonomous, the human managing them gains free time. It is desirable to use this free time productively, and a current trend is to use it to manage multiple robots. We present the notion of neglect tolerance as a means of characterizing how robot autonomy and interface design shape the way free time can be used to support multitasking in general and multirobot teams in particular. We use neglect tolerance to 1) identify the maximum number of robots that can be managed; 2) identify feasible configurations of multirobot teams; and 3) predict the performance of multirobot teams under certain independence assumptions. We present a measurement methodology, based on a secondary-task paradigm, for obtaining neglect tolerance values that allow a human to balance workload with robot performance.
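The abstract's first use of neglect tolerance, bounding how many robots one operator can manage, is often expressed as a "fan-out" estimate. The sketch below is illustrative only: the function name and the specific formula (fan-out ≈ neglect time / interaction time + 1) are a commonly cited approximation in the HRI literature, not details taken from the abstract itself.

```python
def fan_out(neglect_time: float, interaction_time: float) -> float:
    """Estimate how many robots one operator can manage.

    neglect_time: seconds a robot can be ignored before its
        performance drops below an acceptable threshold.
    interaction_time: seconds of operator attention needed to
        restore the robot to peak performance.
    The "+ 1" accounts for the robot currently being serviced
    while the others run unattended.
    """
    if interaction_time <= 0:
        raise ValueError("interaction time must be positive")
    return neglect_time / interaction_time + 1

# Example: a robot that tolerates 30 s of neglect and needs 10 s of
# attention suggests one operator can handle about 4 robots.
print(fan_out(30, 10))
```

Under the independence assumptions the abstract mentions, this kind of ratio gives a first-order team-size bound; interface overhead and task coupling would lower it in practice.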
Navigation is an essential element of many remote robot operations, including search and rescue, reconnaissance, and space exploration. Previous reports on using remote mobile robots suggest that navigation is difficult due to poor situation awareness. Experts in human-robot interaction have recommended that interfaces between humans and robots provide more spatial information and better situational context in order to improve an operator's situation awareness. This paper presents an ecological interface paradigm that combines video, map, and robot-pose information into a 3-D mixed-reality display. The ecological paradigm is validated in planar worlds by comparing it against the standard interface paradigm in a series of simulated and real-world user studies. Based on the experimental results, observations in the literature, and working hypotheses, we present a series of principles for presenting information to an operator of a remote robot.
Most interfaces for robot control have focused on providing users with the most current information and giving status messages about what the robot is doing. While this may work for people who are already experienced in robotics, we need an alternative paradigm for enabling new users to control robots effectively. Instead of approaching the problem as an issue of what information could be useful, the focus should be on presenting essential information in an intuitive way. One way to do this is to leverage perceptual cues that people are accustomed to using. By displaying information in such contexts, people are able to understand and use the interface more effectively. This paper presents interfaces that allow users to navigate in 3-D worlds with integrated range and camera information.
One of the fundamental aspects of robot teleoperation is the ability to successfully navigate a robot through an environment. We define successful navigation to mean that the robot minimizes collisions and arrives at the destination in a timely manner. Video and map information is often presented to a robot operator to aid in navigation tasks. This paper addresses the usefulness of map and video information in a navigation task by comparing a side-by-side (2-D) representation and an integrated (3-D) representation in both a simulated and a real-world study. The results suggest that video is sometimes more helpful than a map, and at other times a map is more helpful than video. From a design perspective, an integrated representation seems to help navigation more than placing map and video side by side.