The large amount of sensitive personal information used in deep learning models has raised considerable privacy concerns. Sensitive data may be memorized or encoded into the parameters or outputs of a Wasserstein Generative Adversarial Network (WGAN); this can be prevented by applying privacy-preserving algorithms during parameter training, while the model is still expected to produce effective generated results. We propose a vector-valued differentially private bilateral alternative (DPBA) algorithm, a novel perturbation method for the training process. Vector-valued Gaussian (VVG) noise carrying functional structure information is injected into the WGAN to generate data with privacy protection, and the model is verified to satisfy differential privacy. The bilateral alternative noise randomly perturbs the gradient while still allowing the model to generate informative, feature-rich samples, and the dynamic, vector-based perturbation ensures the strength of the privacy guarantee. In extensive evaluation, our algorithm outperformed state-of-the-art techniques on usability metrics across all validation datasets. The downstream classification accuracy was 97.04% on the generated Modified National Institute of Standards and Technology (MNIST) dataset and 80.91% on the generated Fashion-MNIST dataset. On MNIST, our method improved the average accuracy of neural network classifiers by at least 16.81%, and on Fashion-MNIST by at least 3.55%. In the multichannel generation tasks, the binary classification accuracy on CelebFaces Attributes (CelebA) improved by at least 10.4%, and the accuracy on the Street View House Numbers (SVHN) dataset reached 86.1%. Under simulated gradient attacks, the perturbation method proved highly resilient to gradient-based data recovery.
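The gradient perturbation described above can be illustrated with a minimal DP-SGD-style sketch: each per-step gradient is clipped to a fixed L2 norm and then perturbed with Gaussian noise. This is a generic simplification for intuition only; the paper's DPBA method additionally uses structured vector-valued noise and a bilateral alternative schedule, and the names `clip_and_perturb`, `clip_norm`, and `sigma` below are illustrative assumptions, not the authors' API.

```python
import numpy as np

def clip_and_perturb(grad, clip_norm=1.0, sigma=0.5, seed=None):
    """Clip a gradient vector to L2 norm `clip_norm`, then add
    isotropic Gaussian noise with standard deviation sigma*clip_norm.

    Simplified DP-SGD-style step for intuition; DPBA itself injects
    structured vector-valued Gaussian noise, which this sketch omits.
    """
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping threshold.
    clipped = grad / max(1.0, norm / clip_norm)
    # Noise scale is tied to the clipping norm (the sensitivity bound).
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise
```

With `sigma=0` the function reduces to plain norm clipping, which makes the sensitivity bound easy to check in isolation before noise is added.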