The proliferation of "deepfake" technologies, particularly those enabling face swapping in images and videos, poses significant challenges, as well as opportunities, in digital media manipulation. Despite considerable advances, existing methods often struggle to maintain visual coherence, especially in preserving background features and integrating identity traits realistically. This study introduces a face replacement model that addresses these issues within a single unified framework, employing the Adaptive Attentional Denormalization (AAD) mechanism from FaceShifter, extracting identity features with ArcFace, and using BiSeNet for enhanced attribute extraction. Key to our approach is FastGAN, which improves training efficiency on relatively small datasets. We demonstrate that the model generates convincing, high-fidelity face swaps, with a marked improvement in blending the swapped identity seamlessly into the original background context. Our findings not only advance visual deepfake generation by improving realism and training efficiency, but also highlight the potential for applications where authentic visual representation is crucial.
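For concreteness, the sketch below illustrates one way an AAD-style layer of the kind described above can be written in PyTorch: instance-normalized activations are modulated separately by attribute features (per-pixel scale and bias) and by an identity embedding (global scale and bias), and a learned attention mask blends the two streams per pixel. The class and parameter names (`AADLayer`, `att_gamma`, `id_dim`, etc.), the channel sizes, and the 1x1 convolutions are illustrative assumptions, not the exact configuration of our model or of FaceShifter.

```python
import torch
import torch.nn as nn


class AADLayer(nn.Module):
    """Minimal sketch of an AAD-style denormalization layer (assumed design).

    Blends identity conditioning (e.g., an ArcFace embedding) with spatial
    attribute features (e.g., from a BiSeNet-based attribute encoder) via a
    learned per-pixel attention mask.
    """

    def __init__(self, h_channels: int, att_channels: int, id_dim: int = 512):
        super().__init__()
        # Parameter-free instance norm; scale/bias come from the two branches.
        self.norm = nn.InstanceNorm2d(h_channels, affine=False)
        # Attribute branch: per-pixel scale and bias from attribute features.
        self.att_gamma = nn.Conv2d(att_channels, h_channels, kernel_size=1)
        self.att_beta = nn.Conv2d(att_channels, h_channels, kernel_size=1)
        # Identity branch: global scale and bias from the identity embedding.
        self.id_gamma = nn.Linear(id_dim, h_channels)
        self.id_beta = nn.Linear(id_dim, h_channels)
        # Attention mask deciding, per pixel, how identity and attributes mix.
        self.mask_conv = nn.Conv2d(h_channels, 1, kernel_size=1)

    def forward(self, h, z_att, z_id):
        # z_att is assumed to be resized to h's spatial resolution upstream.
        h_norm = self.norm(h)
        # Attribute-conditioned activation (pose, background, lighting).
        a = self.att_gamma(z_att) * h_norm + self.att_beta(z_att)
        # Identity-conditioned activation (source-face identity), broadcast
        # over spatial dimensions.
        b, c = h.shape[0], h.shape[1]
        i = (self.id_gamma(z_id).view(b, c, 1, 1) * h_norm
             + self.id_beta(z_id).view(b, c, 1, 1))
        # Soft mask blends the two streams at every spatial location.
        m = torch.sigmoid(self.mask_conv(h_norm))
        return (1.0 - m) * a + m * i


# Usage with dummy tensors (shapes are illustrative):
layer = AADLayer(h_channels=64, att_channels=64, id_dim=512)
h = torch.randn(2, 64, 32, 32)       # decoder activations
z_att = torch.randn(2, 64, 32, 32)   # attribute features at matching size
z_id = torch.randn(2, 512)           # identity embedding
out = layer(h, z_att, z_id)          # -> (2, 64, 32, 32)
```

In this sketch the attention mask is what lets the layer keep background and attribute regions intact while injecting identity only where it matters, which is the property the abstract credits for the improved blending.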