Driving assistant systems are becoming an attractive service task in the field of intelligent robotics. Drivers encounter a wide variety of situations on the road, and robotic systems can help them understand both how the surrounding situation is changing and how the robot itself perceives that situation. From the viewpoint of interaction performance, proper situation awareness by an in-car robotic assistant and the appropriate selection of corresponding reactions are crucial prerequisites for long-term interaction between a human driver and the car's robotic system. In this paper, we focus on sustaining human-robot interaction in driving situations, examining which types of cognitive situation occur and how affective interaction can be designed for a robotic driving assistant. The appropriateness of the recognized driving situations and of the robot's reactions is verified in our experimental system, which comprises a virtual driving environment and a tablet-based robotic agent.
Many companies and researchers have attempted to make people feel more familiar with robots. As one example, a dance performance by an entertainment robot could provide an event easily staged in a public place. In most cases, however, the workload on programmers and researchers who must hand-program a robot's dance motions and synchronize them with the music is too heavy. In addition, pre-programmed dance motions and synchronization information are useful only for one specific piece of music and are useless for any other music input. To solve these problems, we introduce a new system that makes a robot dance automatically to real-time music input. The system consists of two main parts: the first is a real-time beat extraction system for music; the second is a dance-motion system for a humanoid robot. In the first part, the music input is analyzed with an FFT, and the peak-to-peak time duration in the low-frequency band is computed. In the second part, the process of generating and synchronizing the humanoid robot's dance motion is described. Finally, we perform experiments to check the validity of the proposed system, and we discuss its limitations and future work.
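As a rough illustration of the low-frequency peak-to-peak approach this abstract describes, the following Python sketch estimates a beat period by thresholding per-frame FFT energy in the low band and measuring the interval between successive energy peaks. The sample rate, frame size, band cutoff, and peak threshold are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of FFT-based beat extraction on mono PCM samples.
# All numeric constants below are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 44100          # Hz, assumed input rate
FRAME_SIZE = 1024            # samples per analysis frame
LOW_BAND_HZ = 200            # upper edge of the "low frequency area"
PEAK_FACTOR = 1.5            # a peak must exceed 1.5x the mean energy

def low_band_energy(frame):
    """Energy of the frame below LOW_BAND_HZ, via FFT magnitude."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return float(np.sum(spectrum[freqs < LOW_BAND_HZ] ** 2))

def estimate_beat_period(samples):
    """Return the median peak-to-peak interval (seconds) of low-band energy."""
    n_frames = len(samples) // FRAME_SIZE
    energies = np.array([
        low_band_energy(samples[i * FRAME_SIZE:(i + 1) * FRAME_SIZE])
        for i in range(n_frames)
    ])
    mean_e = energies.mean()
    # A frame counts as a beat peak if it exceeds the threshold
    # and is a local maximum relative to its neighbors.
    peaks = [i for i in range(1, n_frames - 1)
             if energies[i] > PEAK_FACTOR * mean_e
             and energies[i] > energies[i - 1]
             and energies[i] >= energies[i + 1]]
    if len(peaks) < 2:
        return None
    intervals = np.diff(peaks) * FRAME_SIZE / SAMPLE_RATE
    return float(np.median(intervals))

if __name__ == "__main__":
    # Synthetic test: a 2 Hz click train should yield a ~0.5 s beat period.
    t = np.arange(0, 5.0, 1.0 / SAMPLE_RATE)
    clicks = (np.sin(2 * np.pi * 60 * t) *
              (np.mod(t, 0.5) < 0.05).astype(float))
    print("estimated beat period:", estimate_beat_period(clicks), "s")
```

The estimated beat period can then drive the timing of the robot's pre-defined dance primitives, stretching or compressing each motion segment to land on the next predicted beat.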
To achieve effective interaction between humans and robots, the user interface is one of the most important factors. Toward a valid and appropriate user interface, we introduce a new system consisting of a beam projector that displays images for interacting with users and a laser scanner that detects a user's foot motion, allowing the system to recognize the user's input and intention. In this paper, we focus on two problems: how to make the projected images rectangular, so that users interact with ordinary rectangular images rather than the trapezoidal images produced when the robot-mounted projector projects at an oblique angle; and how to distinguish a user's foot motions between two classes, drag motions and click motions. Finally, we run a test to check the validity of the proposed system and examine its limitations and future work.
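As a rough illustration of the second problem, the following Python sketch separates tracked foot motions into drag and click events by their total displacement and dwell time. The FootTrack structure and all thresholds are hypothetical assumptions for illustration, not the paper's calibrated values; the keystone-correction half of the problem amounts to pre-warping the image with the inverse of the projector's projective distortion and is not sketched here.

```python
# Hypothetical sketch of the drag/click distinction from laser-scanner
# foot tracks. Thresholds below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple
import math

Point = Tuple[float, float]          # (x, y) on the floor plane, metres

CLICK_MAX_TRAVEL = 0.05              # foot moves < 5 cm -> candidate click
CLICK_MIN_DWELL = 0.3                # and stays in place >= 0.3 s
DRAG_MIN_TRAVEL = 0.15               # foot moves >= 15 cm -> drag

@dataclass
class FootTrack:
    points: List[Point]              # successive foot positions
    timestamps: List[float]          # seconds, same length as points

def classify(track: FootTrack) -> str:
    """Label a completed foot track as 'click', 'drag', or 'none'."""
    if len(track.points) < 2:
        return "none"
    (x0, y0), (x1, y1) = track.points[0], track.points[-1]
    travel = math.hypot(x1 - x0, y1 - y0)
    dwell = track.timestamps[-1] - track.timestamps[0]
    if travel >= DRAG_MIN_TRAVEL:
        return "drag"                # sustained displacement across the image
    if travel < CLICK_MAX_TRAVEL and dwell >= CLICK_MIN_DWELL:
        return "click"               # foot held nearly still on one spot
    return "none"

if __name__ == "__main__":
    still = FootTrack([(0.0, 0.0), (0.01, 0.0)], [0.0, 0.4])
    swipe = FootTrack([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)], [0.0, 0.2, 0.4])
    print(classify(still), classify(swipe))   # -> click drag
```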