Deep Learning as a Service (DLaaS) provides an efficient way to deploy deep neural networks (DNNs) in various applications. Despite the accuracy of deep learning models, there is a paramount need for robust protocols to protect the privacy and security of data, particularly sensitive information. Currently, cloud models are often permitted to process such data without proper safeguards, which can lead to privacy breaches. In particular, during the training phase most models are trained on plaintext rather than ciphertext, exposing data owners to potential privacy leakage. While some researchers have begun to train on homomorphically encrypted data, most implementations use only a single key pair for encryption and decryption, neglecting the robustness of the encryption scheme when each user employs a unique key. To address these issues, this paper presents a practical localized Federated Learning (FL) method named BatchEncryption, which uses Efficient Integer Vector Homomorphic Encryption (EIVHE) for privacy-preserving training and inference. BatchEncryption encrypts raw datasets in blocks with different key pairs, preserving the diversity and robustness of the model. Our experiments demonstrate that training neural networks with multiple key pairs yields higher accuracy than using a single key pair, and the proposed method achieves around 97% accuracy on MNIST data encrypted with different key pairs.
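To illustrate the block-wise, multi-key idea described above, the following is a minimal NumPy sketch built around the core EIVHE invariant S c = w x + e (secret key S, ciphertext c, plaintext vector x, large scalar w, small noise e). It is not the paper's implementation: it omits key switching and the ciphertext-space operations needed for training, and all function names, parameters, and the toy data are illustrative assumptions; it only shows each data block being encrypted under its own key pair and decrypted with the matching key.

```python
import numpy as np

def keygen(m, t_bound=10):
    """One key pair per data block: secret key S = [I | T] with a random integer T (illustrative)."""
    T = np.random.randint(-t_bound, t_bound + 1, size=(m, m)).astype(np.int64)
    S = np.hstack([np.eye(m, dtype=np.int64), T])
    return S, T

def encrypt(x, T, w=2**20, e_bound=5):
    """Produce ciphertext c satisfying S @ c = w*x + e for small random noise e."""
    m = x.shape[0]
    a = np.random.randint(-2**10, 2**10, size=m).astype(np.int64)
    e = np.random.randint(-e_bound, e_bound + 1, size=m).astype(np.int64)
    return np.concatenate([w * x + e - T @ a, a])

def decrypt(c, S, w=2**20):
    """Recover x = round(S @ c / w); exact while the noise stays below w/2."""
    return np.rint(S @ c / w).astype(np.int64)

# Block-wise encryption with distinct key pairs (toy stand-in for raw records).
data = np.random.randint(0, 256, size=(8, 4)).astype(np.int64)
blocks = np.array_split(data, 2)          # two blocks -> two independent key pairs
encrypted_blocks, secret_keys = [], []
for blk in blocks:
    S, T = keygen(blk.shape[1])
    secret_keys.append(S)
    encrypted_blocks.append(np.stack([encrypt(row, T) for row in blk]))

# Sanity check: each block decrypts only with its own key.
assert all(
    np.array_equal(decrypt(c, S), row)
    for blk, cts, S in zip(blocks, encrypted_blocks, secret_keys)
    for row, c in zip(blk, cts)
)
```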