Human-like appearance and movement are important for social robots in human–robot interaction. This paper presents the hardware mechanism and software architecture of an incarnate announcing robot system called EveR-1. EveR-1 is a robot platform for implementing and testing emotional expressions and human–robot interactions. EveR-1 is not bipedal; it sits on a chair and communicates information by moving its upper body. The skin of the head and upper body is made of silicone jelly to give a human-like texture. To express human-like emotion, it uses body gestures as well as facial expressions selected by a personality model. EveR-1 provides a guidance service at an exhibition, narrates fairy tales, and holds simple conversations with humans.
Because android faces differ in internal structure, degrees of freedom, and skin control positions and ranges, it is very difficult to generate facial expressions by applying existing facial-expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that automatically generates robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robots (an older man and a young woman) that can simulate human skin movements, and selected 16 control positions to generate their facial expressions. Expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying a real-coded genetic algorithm; subsequently, it was used to generate robot facial expressions. To determine the fitness of a generated facial expression, its expression intensity was evaluated by a facial-expression recognizer. The proposed system was used to generate six facial expressions (anger, disgust, fear, happiness, sadness, surprise); the results confirmed that they were more appropriate than manually generated facial expressions.
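The evolutionary loop described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the population size, generation count, BLX-α crossover, and Gaussian mutation operators are assumptions, and `expression_intensity` is a stand-in for the facial-expression recognizer that scores each candidate in the actual system.

```python
import random

N_GENES = 16          # one gene per motor / skin control position
POP_SIZE = 30         # assumed population size (not given in the abstract)
GENERATIONS = 40      # assumed number of generations
GENE_MIN, GENE_MAX = 0.0, 1.0   # normalized motor displacement range

def expression_intensity(chromosome):
    """Placeholder fitness: in the paper this is the recognizer's
    intensity score for the target expression. Here we reward
    proximity to an arbitrary target vector for illustration only."""
    target = [0.5] * N_GENES
    return -sum((g - t) ** 2 for g, t in zip(chromosome, target))

def make_individual():
    return [random.uniform(GENE_MIN, GENE_MAX) for _ in range(N_GENES)]

def blx_crossover(p1, p2, alpha=0.5):
    """BLX-alpha crossover, a common real-coded GA operator."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        g = random.uniform(lo - alpha * span, hi + alpha * span)
        child.append(min(max(g, GENE_MIN), GENE_MAX))
    return child

def mutate(ind, rate=0.1, sigma=0.05):
    """Per-gene Gaussian mutation, clipped to the motor range."""
    return [min(max(g + random.gauss(0, sigma), GENE_MIN), GENE_MAX)
            if random.random() < rate else g
            for g in ind]

def evolve():
    pop = [make_individual() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half, refill with mutated crossover children.
        pop.sort(key=expression_intensity, reverse=True)
        elite = pop[:POP_SIZE // 2]
        children = []
        while len(elite) + len(children) < POP_SIZE:
            p1, p2 = random.sample(elite, 2)
            children.append(mutate(blx_crossover(p1, p2)))
        pop = elite + children
    return max(pop, key=expression_intensity)

best = evolve()   # 16 motor displacements for the evolved expression
```

In the described system, each chromosome would be sent to the android's motors, the resulting face observed, and the recognizer's score for the target expression (e.g. "happy") fed back as fitness, closing the loop between hardware and optimizer.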