We are concerned with a worst-case scenario in model generalization, in which a model must perform well on many unseen domains while only a single domain is available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to support our formulation, while extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single domain generalization.
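As a rough illustration of the idea, the sketch below adversarially perturbs source samples to maximize the task loss while a penalty in a WAE latent space (standing in for the relaxed worst-case constraint) keeps the perturbed samples semantically close to the source domain. The names `model`, `wae_encoder`, and all hyper-parameters are placeholders for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def augment_domain(model, wae_encoder, x, y, steps=5, step_size=1.0, gamma=1.0):
    """Perturb source samples (x, y) into a 'fictitious' yet 'challenging' domain.

    The classification loss is ascended while an MSE penalty in the WAE latent
    space bounds semantic drift from the source domain (a relaxation of the
    usual worst-case constraint). Sketch only; hyper-parameters are assumed.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    z_src = wae_encoder(x).detach()  # latent code of the original samples
    for _ in range(steps):
        task_loss = F.cross_entropy(model(x_adv), y)
        semantic_drift = F.mse_loss(wae_encoder(x_adv), z_src)
        objective = task_loss - gamma * semantic_drift  # maximize loss, bound drift
        grad, = torch.autograd.grad(objective, x_adv)
        x_adv = (x_adv + step_size * grad).detach().requires_grad_(True)
    return x_adv.detach()
```

In a meta-learning loop, the augmented samples would be added to the training pool and the model updated on the mixture of source and fictitious domains, mirroring the scheme described above.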
Facial expression transfer from a single image is a challenging task and has drawn sustained attention in the fields of computer vision and computer graphics. Recently, generative adversarial nets (GANs) have provided a new approach to transferring a single face image toward target facial expressions. However, it is still difficult to obtain a sequence of smoothly changing facial expressions. We present a novel GAN-based method for generating emotional facial expression animations given a single image and several facial landmarks for the in-between stages. In particular, landmarks of other subjects are incorporated into a GAN model to control the generated facial expression from a latent space. With the trained model, high-quality face images and a smoothly changing facial expression sequence can be obtained effectively, as shown qualitatively and quantitatively in our experiments on the Multi-PIE and CK+ datasets.
KEYWORDS: dynamic expression sequence, facial expression transfer, generative adversarial nets
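To make the landmark-conditioned generation concrete, below is a minimal sketch of a generator that fuses a source face with a target landmark vector; the layer sizes, `landmark_dim`, and the additive fusion scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LandmarkConditionedGenerator(nn.Module):
    """Toy generator: encode the source face, inject the target landmarks as a
    control signal, and decode a face with the corresponding expression.
    (Architecture is a sketch, not the method described in the abstract.)"""

    def __init__(self, img_channels=3, landmark_dim=68 * 2):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(img_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Linear(landmark_dim, 64)  # landmark control signal
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, img, landmarks):
        h = self.encode(img)                                  # B x 64 x H/4 x W/4
        c = self.fuse(landmarks).unsqueeze(-1).unsqueeze(-1)  # B x 64 x 1 x 1
        return self.decode(h + c)                             # broadcast conditioning

# Interpolating the landmark vector between source and target expressions and
# decoding each intermediate vector yields a smoothly changing sequence.
g = LandmarkConditionedGenerator()
img = torch.randn(1, 3, 64, 64)
lm_src, lm_tgt = torch.randn(1, 136), torch.randn(1, 136)
frames = [g(img, lm_src + t * (lm_tgt - lm_src)) for t in torch.linspace(0, 1, 5)]
```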