One of the great challenges of putting humanoid robots into space is developing cognitive capabilities for the robots, with an interface that allows human astronauts to collaborate with the robots as naturally and efficiently as they would with other astronauts. In this joint effort with NASA and the entire Robonaut team, we are integrating natural language and gesture understanding, spatial reasoning incorporating features such as human-robot perspective taking, and cognitive model-based understanding to achieve a high level of human-robot interaction. Building greater autonomy into the robot frees the human operator(s) from focusing strictly on the demands of operating the robot, and instead allows them to collaborate actively with the robot on the task at hand. By using shared representations between the human and robot, and by enabling the robot to assume the perspective of the human, the humanoid robot may become a more effective collaborator with a human astronaut for achieving mission objectives in space.
Introduction

As we develop and deploy advanced humanoid robots such as Robonaut [1], NASA's robotic astronaut assistant platform, to perform tasks in space in collaboration with human astronauts, we must consider carefully the needs and expectations of the human astronauts in interfacing and working with these humanoid robots. We want to endow the robots with the capabilities necessary for assisting human astronauts as efficiently as possible. Building greater autonomy into the robot will diminish the human burden of controlling it, making the humanoid robot a much more useful collaborator for achieving mission objectives in space.

In this effort we build upon our experience in designing multimodal human-centric interfaces and cognitive models for dynamically autonomous mobile robots. We argue that by building human-like capabilities into Robonaut's cognitive processes, we can achieve a high level of interactivity and collaboration between human astronauts and Robonaut. The necessary components of this cognitive functionality addressed in this paper include the use of cognitive architectures, natural language and gesture understanding, and spatial reasoning with human-robot perspective taking.