Figure 1: We propose a framework based on conditional GANs for mask-guided portrait editing. (a) Mask2image: our framework can generate diverse and realistic faces from a single input target mask (lower-left corner of the first image). (b) Component editing: our framework allows editing the mask to change the shape of face components, e.g., mouth, eyes, and hair. (c) Component transfer: our framework also allows transferring the appearance of each component of a portrait, including hair color.
Abstract

Portrait editing is a popular subject in photo manipulation. Generative Adversarial Networks (GANs) have advanced the generation of realistic faces and enabled richer face editing. In this paper, we identify three issues in existing techniques for portrait synthesis and editing: diversity, quality, and controllability. To address them, we propose a novel end-to-end learning framework that leverages conditional GANs guided by provided face masks to generate faces. The framework learns a feature embedding for every face component (e.g., mouth, hair, eyes) separately, contributing to better correspondences for image translation and to local face editing. With the mask, our network supports many applications, such as mask-driven face synthesis, face swap+ (swapping that includes hair), and local manipulation. It can also slightly boost the performance of face parsing when used as a form of data augmentation.
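To make the per-component embedding idea concrete, here is a minimal PyTorch sketch of encoding each face component separately, given an image and an integer face-parsing mask. The component label set, encoder depth, and embedding size are illustrative assumptions for the sketch, not the paper's actual architecture or training losses.

```python
import torch
import torch.nn as nn

# Hypothetical component labels; the actual integer values depend on
# the face-parsing dataset used (this mapping is an assumption).
COMPONENTS = {"left_eye": 1, "right_eye": 2, "mouth": 3, "hair": 4, "skin": 5}

class ComponentEncoder(nn.Module):
    """Encodes one masked-out face component into a feature embedding.

    A minimal sketch: a small conv stack followed by global pooling;
    the paper's local embedding networks may differ.
    """
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),  # -> (B, embed_dim, 1, 1)
        )

    def forward(self, x):
        return self.net(x).flatten(1)  # -> (B, embed_dim)

class LocalEmbeddings(nn.Module):
    """Learns a separate embedding for every face component."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: ComponentEncoder(embed_dim) for name in COMPONENTS}
        )

    def forward(self, image, mask):
        # image: (B, 3, H, W) RGB; mask: (B, 1, H, W) integer parsing labels.
        embeddings = {}
        for name, label in COMPONENTS.items():
            # Zero out all pixels outside this component before encoding.
            region = image * (mask == label).float()
            embeddings[name] = self.encoders[name](region)
        return embeddings  # one code per component
```

Because each component has its own code, applications like component transfer (Figure 1c) would amount to swapping, say, the "hair" entry between two images' embedding dictionaries before decoding.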