This work proposes a human motion prediction model for handover operations. We use the different phases of the handover operation to improve the human motion predictions. Our attention-based deep learning model takes into account the position of the robot's end effector and the phase of the handover operation to predict future human poses. The model outputs a distribution of possible positions rather than a single deterministic position, a key feature for allowing robots to collaborate with humans. The model has been trained and evaluated on a dataset created with human volunteers and an anthropomorphic robot, simulating handover operations in which the robot is the giver and the human the receiver. For each operation, the human skeleton is captured with an Intel RealSense D435i camera mounted inside the robot's head. The results show a clear improvement in the prediction of the human's right hand and 3D body compared with other methods.
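To make the distributional output concrete, below is a minimal PyTorch sketch of a probabilistic pose head. The Gaussian parameterization, layer sizes, and joint count are assumptions for illustration; the abstract does not describe the network internals or the distribution family.

```python
import torch
import torch.nn as nn

class ProbabilisticPoseHead(nn.Module):
    """Output head that predicts a distribution over future joint
    positions instead of a single deterministic pose.

    Hypothetical sketch: a per-joint Gaussian (mean + log-variance)
    is assumed, as the paper's abstract does not specify the family.
    """

    def __init__(self, feat_dim: int, num_joints: int):
        super().__init__()
        self.mean = nn.Linear(feat_dim, num_joints * 3)     # 3D mean per joint
        self.log_var = nn.Linear(feat_dim, num_joints * 3)  # uncertainty per joint

    def forward(self, features: torch.Tensor):
        mu = self.mean(features)
        sigma = torch.exp(0.5 * self.log_var(features))
        # A Normal distribution lets a downstream planner sample plausible
        # poses or reason about uncertainty around the predicted mean.
        return torch.distributions.Normal(mu, sigma)


# Usage: sample candidate future poses for a batch of encoded sequences.
head = ProbabilisticPoseHead(feat_dim=128, num_joints=25)
dist = head(torch.randn(8, 128))
samples = dist.sample()  # (8, 75): one sampled 3D pose per sequence
```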
In this work we present a novel attention-based deep learning model that uses context and human intention for 3D human body motion prediction in handover human-robot tasks. The model uses a multi-head attention architecture that takes as inputs the human motion, the robot end effector, and the positions of the obstacles. The outputs of the model are the predicted motion of the human body and the predicted human intention. We use this model to analyze a collaborative handover task in which the robot predicts the future motion of the human and uses this information in its planner. We perform several experiments and ask the human volunteers to complete a standard questionnaire rating different features of the task when the robot uses the prediction versus when it does not.
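A hypothetical outline of such a predictor is sketched below, assuming token-level self-attention over the concatenated context and two linear output heads; the abstract names only the inputs and outputs, so every layer size and design choice here is an assumption.

```python
import torch
import torch.nn as nn

class HandoverPredictor(nn.Module):
    """Sketch of a multi-head attention predictor that fuses human motion,
    the robot end-effector pose, and obstacle positions, and outputs both
    a future-motion estimate and a human-intention estimate.
    """

    def __init__(self, d_model=64, n_heads=4, num_joints=25, n_intents=3):
        super().__init__()
        self.point_proj = nn.Linear(3, d_model)  # embed each 3D point as a token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.motion_head = nn.Linear(d_model, num_joints * 3)
        self.intent_head = nn.Linear(d_model, n_intents)

    def forward(self, skeleton, end_effector, obstacles):
        # skeleton: (B, J, 3), end_effector: (B, 1, 3), obstacles: (B, O, 3)
        tokens = self.point_proj(torch.cat([skeleton, end_effector, obstacles], dim=1))
        fused, _ = self.attn(tokens, tokens, tokens)  # self-attention over all context
        summary = fused.mean(dim=1)                   # pooled scene representation
        return self.motion_head(summary), self.intent_head(summary)


model = HandoverPredictor()
motion, intent_logits = model(torch.randn(2, 25, 3),
                              torch.randn(2, 1, 3),
                              torch.randn(2, 4, 3))
```

Treating the end effector and obstacles as extra attention tokens is one plausible way to realize "context as input"; the actual paper may fuse these signals differently.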
In this work, we propose a gesture-based language that allows humans to interact with robots using their bodies in a natural way. We have created a new gesture detection model based on neural networks, together with a new dataset of humans performing a collection of body gestures to train this architecture. Furthermore, we compare body-gesture communication with other communication channels to demonstrate the importance of adding this knowledge to robots. The presented approach is validated in diverse simulations and real-life experiments with untrained volunteers. It attains promising results and establishes itself as a valuable framework for social robotic applications, such as human-robot collaboration and human-robot interaction.
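For illustration only, a minimal gesture classifier over skeleton sequences might look like the sketch below. The LSTM backbone, sequence input format, and gesture vocabulary size are all assumptions; the abstract does not specify the network design.

```python
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    """Hypothetical sketch of a body-gesture detector: a recurrent encoder
    over skeleton sequences followed by logits over a gesture vocabulary.
    """

    def __init__(self, num_joints=25, hidden=128, n_gestures=10):
        super().__init__()
        self.lstm = nn.LSTM(num_joints * 3, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_gestures)

    def forward(self, seq):
        # seq: (B, T, J*3) flattened 3D skeletons over T frames
        _, (h, _) = self.lstm(seq)
        return self.classifier(h[-1])  # one gesture score vector per sequence


clf = GestureClassifier()
logits = clf(torch.randn(4, 30, 75))  # 4 sequences of 30 frames
```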
We present a new social robot named IVO, a robot capable of collaborating with humans and solving different tasks. The robot is intended to cooperate and work with humans in a useful and socially acceptable manner, serving as a research platform for long-term social human-robot interaction. In this paper, we describe this new platform, its communication skills, and the capabilities the robot currently possesses, such as handing an object over to or receiving it from a person, or guiding a human through physical contact. We describe the social abilities of the IVO robot and present the experiments performed for each of the robot's capabilities using its current version.