Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even systems that are less than 100% reliable can provide significant benefit to humans, but that benefit depends on a human operator's ability to understand a robot's behaviors and states. We examine the notion of system transparency as a vital aspect of robotic design for maintaining humans' trust in, and reliance on, increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention behind an action, is apparent to human operators and/or observers. While the physical design of robotic systems has been shown to greatly influence humans' impressions of robots, the determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are ones in which robotic agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human-robot communication and human mental models of robots affect a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.
A transition in robotics from tools to teammates is underway, but because this transition is in its early stages, experience with intelligent robots and agents remains limited. As such, human mental models of intelligent robots are primitive, easily influenced by superficial characteristics, and often incomplete or inaccurate. This paper investigates the factors that influence mental models of robots and explores solutions for the formation of accurate and useful mental models, with a specific focus on military applications. Humans must possess a clear and accurate understanding of how robots communicate and operate, particularly in military settings where intelligent, autonomous robotic agents are desired. Complete and accurate mental models in these hazardous and critical applications will reduce the inherent danger of automation disuse or misuse. Implications for training and for developing appropriate trust are also discussed.
A fundamental aspect of human-robot interaction is the ability to generate expectations about the decisions of one's teammate(s) in order to coordinate plans of action. Cognitive models provide a promising approach: they allow a robot to model a human teammate's decision process, and they can equally model the process by which a human develops expectations about a robot partner's actions. We describe a general cognitive model, developed in the ACT-R cognitive architecture, that can apply to any situation that can be formalized as a decision tree expressed in the form of instructions for the model to execute. The model is composed of three general components: instructions on how to perform the task, situational knowledge, and past decision instances. The model is trained using decision instances from a human expert, and its performance is compared to that of the expert.
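The abstract does not give implementation details, so the following is a minimal, self-contained Python sketch of the "past decision instances" component described above, in the spirit of ACT-R's instance-based retrieval: stored expert decisions are retrieved by recency-weighted, similarity-weighted activation, and the retrieved instance's choice is replayed. The base-level activation term ln(Σ t^-d) and the mismatch penalty for partial matching follow standard ACT-R, but all names, parameter values, and the demo features here are illustrative assumptions; the actual model runs inside the ACT-R architecture (with its instruction-following and situational-knowledge components), not as standalone Python, and ACT-R uses logistic rather than Gaussian noise.

```python
import math
import random


class DecisionInstance:
    """One stored decision: situational features plus the choice made."""

    def __init__(self, features, choice):
        self.features = features   # dict of situational attribute -> value
        self.choice = choice
        self.timestamps = []       # times this instance was (re)encountered


class InstanceBasedModel:
    """Sketch of instance-based decision making a la ACT-R declarative
    memory: activation = base-level recency term - mismatch penalty + noise,
    and the most active past instance determines the decision."""

    def __init__(self, decay=0.5, mismatch_penalty=1.0, noise_sd=0.25):
        self.decay = decay                      # base-level decay d
        self.mismatch_penalty = mismatch_penalty
        self.noise_sd = noise_sd                # Gaussian stand-in for noise
        self.instances = []

    def train(self, features, choice, time):
        """Store an expert decision instance, or reinforce a duplicate."""
        for inst in self.instances:
            if inst.features == features and inst.choice == choice:
                inst.timestamps.append(time)
                return
        inst = DecisionInstance(features, choice)
        inst.timestamps.append(time)
        self.instances.append(inst)

    def _activation(self, inst, probe, now):
        # Base-level activation: ln(sum of (now - t)^-d) over presentations.
        recency = sum((now - t) ** -self.decay
                      for t in inst.timestamps if now > t)
        base = math.log(recency) if recency > 0 else float("-inf")
        # Partial matching: penalize each mismatched situational feature.
        mismatches = sum(1 for k, v in probe.items()
                         if inst.features.get(k) != v)
        return (base - self.mismatch_penalty * mismatches
                + random.gauss(0.0, self.noise_sd))

    def decide(self, probe, now):
        """Retrieve the most active instance and replay its choice.
        Assumes at least one instance has been trained."""
        best = max(self.instances,
                   key=lambda inst: self._activation(inst, probe, now))
        return best.choice


if __name__ == "__main__":
    # Hypothetical expert decisions over situational features.
    model = InstanceBasedModel()
    model.train({"threat": "high", "cover": "yes"}, choice="hold", time=1.0)
    model.train({"threat": "low", "cover": "no"}, choice="advance", time=2.0)
    print(model.decide({"threat": "high", "cover": "yes"}, now=10.0))
```

In the paper's setup, the decision-tree instructions would determine which situational features the model attends to before retrieval, and evaluation would compare the model's replayed choices against the human expert's decisions on situations held out from training.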