If robots are to be introduced into the human world as assistants that aid a person in completing manual tasks, two key problems of today's robots must be solved: the human-robot interface must be intuitive to use, and the safety of the user with respect to injuries inflicted by collisions with the robot must be guaranteed. In this paper we describe the formulation and implementation of a control strategy for robot manipulators which provides quantitative safety guarantees for the user of assistant-type robots. We propose a control scheme that restricts the torque commands of a position control algorithm to values that comply with preset safety restrictions. These restrictions limit the potential impact force of the robot in the case of a collision with a person. Because such accidental collisions may occur with any part of the robot, the scheme controls the impact force not only of the robot's hand but of all of its surfaces. The integration of a visual control interface with the safely controlled robot allows safe and intuitive interaction between a person and the robot. As an example application, the system is programmed to retrieve eye-gaze-selected objects from a table and hand them over to the user on demand.
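The torque-restriction idea in this abstract can be illustrated with a minimal sketch: a plain PD position controller whose joint torques are saturated to preset per-joint bounds before being sent to the actuators. The gains and limits below (kp, kd, tau_max) are illustrative assumptions, not the paper's actual formulation, which would derive the bounds from the allowed impact force at each robot surface.

```python
import numpy as np

def pd_torque(q, dq, q_des, kp, kd):
    """Plain PD position controller in joint space."""
    return kp * (q_des - q) - kd * dq

def limit_torque(tau_cmd, tau_max):
    """Saturate each joint torque so the commanded values stay
    within the preset per-joint safety restrictions."""
    return np.clip(tau_cmd, -tau_max, tau_max)

# Example: a 3-joint arm (all numbers illustrative)
q       = np.array([0.1, -0.2, 0.3])   # measured joint angles [rad]
dq      = np.array([0.0,  0.1, 0.0])   # measured joint velocities [rad/s]
q_des   = np.array([0.5,  0.0, 0.0])   # desired joint angles [rad]
kp, kd  = 50.0, 5.0                    # PD gains
tau_max = np.array([10.0, 8.0, 5.0])   # per-joint safety bounds [N·m]

tau = limit_torque(pd_torque(q, dq, q_des, kp, kd), tau_max)
```

The key design point is that the limiter sits between the position controller and the actuators, so any control law can be used upstream while the safety bounds are enforced unconditionally downstream.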
People naturally express themselves through facial gestures and expressions. Our goal is to build a facial gesture human-computer interface for use in robot applications. We have implemented an interface that tracks a person's facial features in real time (30 Hz). Our system requires neither special illumination nor facial makeup. By using multiple Kalman filters we accurately predict and robustly track facial features, even under disturbances and rapid movements of the head (including both translational and rotational motion). Since we reliably track the face in real time, we are also able to recognise motion gestures of the face. Our system can recognise a large set of gestures (13) ranging from "yes", "no" and
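A minimal sketch of the per-feature tracking idea, under the assumption of a constant-velocity motion model: one Kalman filter per facial feature predicts where the feature will appear in the next frame (so the image search window can stay small even during rapid head motion) and then fuses the measured position. The state and noise models here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class FeatureKF:
    """Constant-velocity Kalman filter for one 2-D facial feature."""

    def __init__(self, dt=1 / 30):                 # 30 Hz frame rate
        self.x = np.zeros(4)                       # state: [x, y, vx, vy]
        self.P = np.eye(4)                         # state covariance
        self.F = np.eye(4)                         # transition: pos += vel * dt
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # we only measure (x, y)
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = 1e-2 * np.eye(4)                  # process noise (illustrative)
        self.R = 2.0 * np.eye(2)                   # measurement noise (illustrative)

    def predict(self):
        """Predict the next position; centre the image search window here."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse the measured feature position z = (x, y)."""
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x += K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage per frame: predict first, then update with the tracker's measurement.
kf = FeatureKF()
search_centre = kf.predict()               # where to look in the next frame
kf.update(np.array([120.0, 84.0]))         # measured pixel position of the feature
```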