The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better across robot types and expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNet couples a generator network, which reconstructs simplified facial images from robot joint configurations, with a state-of-the-art classifier network for facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted and the intended expression through the classifier and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations per facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet successfully generates sets of joint configurations for predefined facial expressions on both robots. The realism of the generated expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated facial expressions on both robots.
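The optimization described in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch: both networks are pretrained and frozen, and gradient descent is run on the joint configuration alone. The architectures, joint count, and number of expression classes below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an ExGenNet-style optimization loop (illustrative only).
import torch
import torch.nn as nn

# Hypothetical stand-ins: joints -> simplified face image, image -> expression logits.
generator = nn.Sequential(nn.Linear(12, 256), nn.ReLU(), nn.Linear(256, 64 * 64))
classifier = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 6))

# Both networks are assumed pretrained; freeze them so only the joints are optimized.
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

joints = torch.zeros(1, 12, requires_grad=True)  # 12 facial joints (assumed)
target = torch.tensor([3])                       # index of the intended expression
optimizer = torch.optim.Adam([joints], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    image = generator(joints)       # reconstruct simplified facial image
    logits = classifier(image)      # predict the expression
    loss = loss_fn(logits, target)  # loss between predicted and intended expression
    loss.backward()                 # backpropagate through classifier and generator
    optimizer.step()                # update only the joint configuration
```

Because only the input joints are updated, rerunning the loop from different initializations can yield the multiple configurations per expression that the abstract mentions.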
Anthropometric measures form the basis for many applications, such as custom clothing or biometric identity verification. Consequently, the ability to extract them automatically from human body scans is of high importance. In this paper we present a new approach based on landmarks and template registration. First, we propose a new method to define anthropometric measures once on a generic template using landmarks. After this initial definition, the template can be registered to an individual body scan, and the landmarks can be transferred to the scan using our second proposed algorithm. We apply the complete approach to real and synthetic human data and show that it outperforms the state of the art for several measures.
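One common way to realize such a landmark transfer, sketched below under the assumption that registration yields one-to-one vertex correspondence, is to store each landmark as barycentric coordinates on a template triangle and re-evaluate those coordinates on the registered scan. The function name and data layout are hypothetical, not the paper's exact algorithm.

```python
# Hypothetical sketch of landmark transfer after template registration,
# assuming the registration produces per-vertex correspondence.
import numpy as np

def transfer_landmark(scan_vertices, triangle, bary):
    """Re-evaluate a landmark, defined once as barycentric coordinates on a
    template triangle, on the corresponding triangle of the registered scan.

    scan_vertices: (N, 3) vertex array sharing the template's indexing
    triangle:      (3,) vertex indices of the triangle carrying the landmark
    bary:          (3,) barycentric coordinates defined on the template
    """
    # With corresponding vertices, the same indices and weights pinpoint
    # the landmark on the scan.
    return bary @ scan_vertices[triangle]

# Toy usage: a registered scan approximated as a perturbed template.
template = np.random.rand(100, 3)
scan = template + 0.05 * np.random.rand(100, 3)
landmark_on_scan = transfer_landmark(scan, np.array([10, 20, 30]),
                                     np.array([0.2, 0.3, 0.5]))
```

A measure defined as a chain of such landmarks on the template (e.g., a circumference polyline) can then be evaluated on any registered scan without redefining it.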
Phase unwrapping remains a challenging problem in fast 3D reconstruction based on structured light, in particular for objects with complex geometry. In this paper we propose supporting phase unwrapping algorithms with additional constraints induced by the scanning setup. This is possible whenever at least two cameras are used, a common case in practice. We generalize the constraints to two or more cameras by introducing the concept of a candidate map. We argue that this greatly reduces the complexity of any subsequent unwrapping algorithm and thereby substantially improves its performance. We demonstrate this by integrating the candidate map into both a local path-following and a global minimum-norm unwrapping method.
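To make the candidate-map idea concrete, the following hedged Python sketch enumerates, per pixel, the fringe orders consistent with the measurement volume and with a second camera. The helpers `triangulate` and `reproject_phase` are assumed stand-ins for a calibrated stereo setup, and the exact pruning rule is an illustrative reconstruction, not the paper's formulation.

```python
# Illustrative sketch: per-pixel candidate map of admissible fringe orders.
import numpy as np

def candidate_map(wrapped, n_periods, triangulate, reproject_phase, tol=0.3):
    """wrapped: (H, W) wrapped phase in [0, 2*pi).
    Returns a dict mapping pixel (y, x) to its list of admissible fringe orders."""
    candidates = {}
    H, W = wrapped.shape
    for y in range(H):
        for x in range(W):
            admissible = []
            for k in range(n_periods):
                phi = wrapped[y, x] + 2 * np.pi * k  # candidate unwrapped phase
                point = triangulate(y, x, phi)       # 3D point implied by this candidate
                if point is None:                    # outside the measurement volume
                    continue
                phi2 = reproject_phase(point)        # wrapped phase seen by camera 2
                # Keep the candidate only if both cameras agree (circular difference).
                if abs(((phi - phi2 + np.pi) % (2 * np.pi)) - np.pi) < tol:
                    admissible.append(k)
            candidates[(y, x)] = admissible
    return candidates

# Toy usage with trivial stand-ins (a real setup would use calibrated geometry):
wrapped = np.random.rand(4, 4) * 2 * np.pi
cmap = candidate_map(
    wrapped, n_periods=8,
    triangulate=lambda y, x, phi: np.array([x, y, phi]) if phi < 10 else None,
    reproject_phase=lambda p: p[2] % (2 * np.pi),
)
```

Any subsequent unwrapping algorithm then only has to choose among the surviving candidates per pixel, which is how the map can reduce its search complexity.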