Existing methods for generating virtual character videos focus on improving either appearance or motion. However, achieving both photo- and motion-realistic characters is critical in real services. To address both aspects, we propose Fake to Real Portrait Control (F2RPC), a unified framework for image destylization and face reenactment. F2RPC employs a blind face restoration model to circumvent GAN inversion limitations, such as identity loss and alignment sensitivity, while preserving the GAN's generation quality. The framework includes two novel sub-modules, AdaGPEN for destylization and PCGPEN for reenactment, both of which leverage the same restoration model as a backbone. AdaGPEN exploits the GAN prior of the restoration model by blending features from the original image and its blurred counterpart using the AdaMix block. PCGPEN reenacts the input image to follow the input motion condition via flow-based feature editing. These components operate in an end-to-end manner, enhancing efficiency and lowering computational overhead. We evaluate F2RPC on a synthetic character dataset for destylization and on high-resolution talking face datasets for reenactment. The results show that F2RPC outperforms the combined use of state-of-the-art methods for destylization (i.e., DualStyleGAN) and reenactment (i.e., StyleHEAT): F2RPC improves FID by 26.4% and preserves identity similarity 95% better on 512 × 512 video. This evidences F2RPC's efficacy and superiority in the photo- and motion-realistic virtual character video generation task.

INDEX TERMS Face morphing, facial reenactment, generative adversarial networks, virtual character destylization.