Federated Learning (FL) marks a paradigm shift in training artificial intelligence (AI) models: clients collaborate on a shared model while their data remain private. In sectors where data sensitivity and model security are paramount, such as fintech and biomedicine, preserving model utility without compromising privacy is essential as AI adoption grows, and FL is therefore attracting significant attention. However, traditional FL methods are vulnerable to Deep Leakage from Gradients (DLG) attacks, and typical defenses incur excessive computational cost or substantial losses in model accuracy. To address these challenges, this work introduces VAEFL, an FL framework that incorporates Variational Autoencoders (VAEs) to strengthen privacy protection without degrading predictive performance. VAEFL partitions the model into a private encoder and a public decoder. The private encoder remains local and maps sensitive data into a privacy-preserving latent space, while the public decoder and classifier are trained collaboratively across clients to produce accurate predictions from the encoded representations. This partition ensures that sensitive data attributes are never exposed, thwarting gradient leakage attacks while still allowing the global model to benefit from the diverse knowledge in client datasets. Comprehensive experiments demonstrate that VAEFL not only surpasses standard FL baselines in privacy preservation but also remains competitive on predictive tasks. VAEFL thus strikes a new balance between data privacy and model utility, offering a secure and efficient federated learning approach for sensitive applications such as the financial domain.
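To make the private/public partition concrete, the following is a minimal sketch, assuming a PyTorch implementation; the class names (PrivateEncoder, PublicHead), layer sizes, and the FedAvg-style averaging step are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PrivateEncoder(nn.Module):
    """VAE encoder kept local on each client; its parameters are never shared."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping the graph differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class PublicHead(nn.Module):
    """Decoder + classifier; only these parameters leave the client."""
    def __init__(self, latent_dim=32, in_dim=784, num_classes=10):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))
        self.classifier = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, z):
        return self.decoder(z), self.classifier(z)

def aggregate_heads(head_states):
    """FedAvg-style step (an assumption here): average only public-head weights."""
    return {k: torch.stack([s[k].float() for s in head_states]).mean(0)
            for k in head_states[0]}
```

Under this sketch, only `PublicHead.state_dict()` is uploaded for aggregation, so the gradients and weights observable by the server never involve the mapping from raw inputs to the latent space, which is the property that blunts DLG-style reconstruction.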