In order to navigate safely and effectively with humans in close proximity, robots must be capable of predicting the future motions of humans. This study first consolidates human studies of motion, intention, and preference into a discretized human model that can readily be used in robotic decision-making algorithms. The Cooperative Markov Decision Process (Co-MDP), a novel framework that improves upon multiagent MDPs, is then proposed to enable socially aware robot obstacle avoidance. Utilizing the consolidated and discretized human model, Co-MDP allows the system to (1) approximate rational human behavior and intention, (2) generate socially aware robotic obstacle avoidance behavior, and (3) remain robust to the uncertainty of human intention and motion variance. Simulations of a human-robot co-populated environment verify Co-MDP as a feasible obstacle avoidance algorithm. In addition, the anthropomorphic behavior of Co-MDP was assessed and confirmed with a human-in-the-loop experiment. Results reveal that participants could not reliably distinguish agents controlled by Co-MDP from those controlled by human operators, and the reported confidence of their choices indicates that participants' judgments were backed by behavioral evidence rather than random guesses. Thus, the main contributions of this paper are: the consolidation of past studies of rational human behavior and intention into a simple, discretized model; the development of Co-MDP, a robotic decision framework that uses this human model and maximizes the joint utility of the human and robot; and an experimental design for evaluating human acceptance of obstacle avoidance algorithms.
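As a rough illustration of the kind of framework described above (a sketch under stated assumptions, not the authors' Co-MDP implementation), the example below runs value iteration over a discretized joint human-robot state space, with a reward that combines both agents' progress toward their goals and a personal-space penalty. The grid size, action set, greedy human model, and reward weights are all illustrative assumptions.

```python
# Minimal sketch of joint-utility value iteration on a discretized grid.
# Not the authors' implementation; all parameters below are assumptions.
import itertools
import numpy as np

GRID = 5                                              # shared workspace, GRID x GRID cells
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up / down / right / left / stay
GAMMA = 0.95                                          # discount factor

def clip(p):
    """Keep a cell coordinate inside the grid."""
    return (min(max(p[0], 0), GRID - 1), min(max(p[1], 0), GRID - 1))

def joint_reward(robot, human, robot_goal, human_goal):
    """Joint utility: both agents' progress toward their goals, minus a
    proxemics penalty when the robot intrudes on the human's personal space."""
    r = -np.hypot(robot[0] - robot_goal[0], robot[1] - robot_goal[1])
    r -= np.hypot(human[0] - human_goal[0], human[1] - human_goal[1])
    dist = np.hypot(robot[0] - human[0], robot[1] - human[1])
    if dist < 1.5:                                    # assumed personal-space radius (cells)
        r -= 10.0 * (1.5 - dist)
    return r

def value_iteration(robot_goal, human_goal, iters=50):
    """Compute a joint value function over (robot, human) grid positions."""
    states = list(itertools.product(range(GRID), repeat=4))
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            robot, human = (s[0], s[1]), (s[2], s[3])
            # Human modeled as greedily approaching their goal: a crude stand-in
            # for the discretized human model described in the abstract.
            nh = min((clip((human[0] + a[0], human[1] + a[1])) for a in ACTIONS),
                     key=lambda p: np.hypot(p[0] - human_goal[0],
                                            p[1] - human_goal[1]))
            best = -np.inf
            for ar in ACTIONS:
                nr = clip((robot[0] + ar[0], robot[1] + ar[1]))
                q = (joint_reward(nr, nh, robot_goal, human_goal)
                     + GAMMA * V[(nr[0], nr[1], nh[0], nh[1])])
                best = max(best, q)
            V[s] = best
    return V

# Example usage: the robot heads to the top-right corner while the human heads
# to the top-left; the value function trades off both goals against proxemics.
V = value_iteration(robot_goal=(4, 4), human_goal=(0, 4))
```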
To ensure both the physical and mental safety of humans during human-robot interaction (HRI), a rich body of literature has accumulated, and the notion of socially acceptable robot behavior has emerged. Specifically, robot motion must not only be physically collision-free but must also consider and respect the social conventions developed and enforced in human social contexts. Among these conventions, personal space, or proxemics, is one of the most commonly considered in robot behavioral design. Nevertheless, most previous research efforts have assumed that robots can generate human-like motion by merely mimicking a human, and the resulting behavioral algorithms are rarely assessed and verified by human participants. Therefore, to fill this research gap, a Turing-like simulation test was conducted in which two agents (each either a human or a robot) interacted in a shared space. Participants (33 in total) were asked to identify and label the category of each agent and then completed questionnaires. Results revealed that people with different attitudes toward, and prior expectations of, appropriate robot behavior responded to the algorithm differently, and their identification accuracy varied significantly. In general, by considering personal space in the robot obstacle avoidance algorithm, robots demonstrated more human-like motion behaviors, as confirmed by the human experiments.
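To illustrate how personal space can enter an obstacle avoidance objective, the sketch below implements a commonly used asymmetric Gaussian proxemics cost. The functional form, parameter values, and function names are assumptions for illustration, not the exact model used in this study.

```python
# Illustrative proxemics cost term that a planner could add to its objective.
# The asymmetric Gaussian form and its parameters are assumptions.
import numpy as np

def personal_space_cost(robot_xy, human_xy, human_heading,
                        sigma_front=1.2, sigma_side=0.8, sigma_back=0.6):
    """Asymmetric Gaussian cost around the human: larger in front of the
    person than behind, so a planner keeps extra clearance in the direction
    the human is walking."""
    dx, dy = robot_xy[0] - human_xy[0], robot_xy[1] - human_xy[1]
    # Rotate the robot's offset into the human's body frame.
    c, s = np.cos(-human_heading), np.sin(-human_heading)
    fx, fy = c * dx - s * dy, s * dx + c * dy   # fx: forward axis, fy: lateral axis
    sigma_x = sigma_front if fx >= 0 else sigma_back
    return np.exp(-0.5 * ((fx / sigma_x) ** 2 + (fy / sigma_side) ** 2))

# Example: a robot 1 m directly in front of a person walking along +x incurs a
# higher cost (~0.71) than one the same distance behind them (~0.25).
print(personal_space_cost((1.0, 0.0), (0.0, 0.0), 0.0))   # in front
print(personal_space_cost((-1.0, 0.0), (0.0, 0.0), 0.0))  # behind
```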