Facial attribute editing has immense applications in today's digital world, including virtual makeup, face generation for the animation and gaming industries, social media face image enhancement, and improving face recognition systems. This task can be performed manually or automatically. Manual facial attribute editing, carried out with software such as Adobe Photoshop, is a tedious and time-consuming process that requires an expert. In contrast, automatic facial attribute editing can be completed within a few seconds using encoder-decoder structures and deep learning-based generative models such as conditional Generative Adversarial Networks. In our work, we use different attribute vectors as conditional information to generate the desired target images, and the encoder-decoder structure incorporates feature transfer units to select and modify encoder features. These transferred encoder features are then concatenated with the decoder features to strengthen the attribute editing ability of the model. We apply a reconstruction loss to preserve all details of the face image other than the target attributes, an adversarial loss for visually realistic editing, and an attribute manipulation loss to ensure that the generated image possesses the correct attributes. Furthermore, we adopt the WGAN-GP form of the adversarial loss to improve training stability and reduce the mode collapse problem that often occurs in GANs. Experiments on the CelebA dataset show that this method produces visually realistic attribute-edited face images with a PSNR/SSIM of 31.7/0.95 and an average attribute editing accuracy of 89.23% over 13 facial attributes: Bangs, Mustache, Bald, Bushy Eyebrows, Blond Hair, Eyeglasses, Black Hair, Brown Hair, Mouth Slightly Open, Male, No Beard, Pale Skin, and Young.
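
As a rough illustration of the training objective described above, the following PyTorch-style sketch combines the reconstruction, WGAN-GP adversarial, and attribute manipulation (classification) terms. The generator G, critic D, attribute classifier C, and the loss weights lambda_rec, lambda_cls, and lambda_gp are illustrative assumptions for the sketch, not the paper's actual implementation.

```python
# Minimal sketch (assumed interfaces, not the paper's code) of the combined objective:
# reconstruction + WGAN-GP adversarial + attribute manipulation (classification) loss.
import torch
import torch.nn.functional as F

def gradient_penalty(D, real, fake):
    """WGAN-GP penalty on the critic's gradients at points interpolated between real and fake images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=D(interp).sum(), inputs=interp, create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

def generator_loss(G, D, C, x, attr_src, attr_tgt, lambda_rec=10.0, lambda_cls=1.0):
    """Reconstruction + adversarial + attribute manipulation loss for the generator."""
    x_edit = G(x, attr_tgt)                  # image edited toward the target attributes
    x_rec = G(x, attr_src)                   # image regenerated with its own attributes
    loss_rec = F.l1_loss(x_rec, x)           # preserve details other than target attributes
    loss_adv = -D(x_edit).mean()             # WGAN critic score (generator drives it up)
    loss_cls = F.binary_cross_entropy_with_logits(C(x_edit), attr_tgt)  # correct attributes
    return loss_adv + lambda_rec * loss_rec + lambda_cls * loss_cls

def discriminator_loss(G, D, x, attr_tgt, lambda_gp=10.0):
    """WGAN-GP critic loss: real vs. edited images plus the gradient penalty."""
    x_edit = G(x, attr_tgt).detach()
    return D(x_edit).mean() - D(x).mean() + lambda_gp * gradient_penalty(D, x, x_edit)
```

In this sketch, attr_src and attr_tgt are the source and target attribute vectors (floats in [0, 1]), and C returns per-attribute logits; the weighting of the three terms is one common choice for WGAN-GP-based editing models and may differ from the values used in our experiments.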