Thanks to its powerful feature extraction capability and unprecedented success in computer vision and natural language processing tasks, deep learning (DL) has in recent years also been applied to wireless communications. However, DL models have proven to be inherently vulnerable to adversarial perturbations: carefully crafted perturbations that appear imperceptible but can fool the models into misclassification. Existing studies in wireless communications employ gradient-based or optimization-based methods, for example, FGSM, PGD and iterative FGSM, to craft input-dependent adversarial perturbations. However, these methods have high computational complexity, and the perturbations they craft cannot be readily applied to fool DL-based algorithms in wireless communications. In this paper, we study the efficient generation of universal (input-agnostic) adversarial perturbations (UAPs) for attacking DL-based modulation classification algorithms. Communication signals exhibit unique characteristics; for example, modulated signals received after transmission through noisy channels are densely distributed around the modulation constellation points. Exploiting this characteristic, we propose a generative network that models the distribution of adversarial perturbations for highly efficient generation of UAPs, which can fool DL-based modulation classification algorithms for most inputs. Instead of attacking all regions of the modulated signals, we introduce attention mechanisms into the generative network design so that the perturbations are concentrated around the constellation points. In this way, higher attack efficiency and lower perceptibility are achieved by crafting UAPs with smooth variations and a more focused attack. In addition, a diversity term is included in the loss function to help capture a wide distribution of adversarial perturbations and thereby generate more diverse UAPs.
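To make the contrast with the generative approach concrete, the following is a minimal NumPy sketch (a toy linear classifier with hypothetical weights, not the paper's model) of why FGSM-style perturbations are input-dependent: the perturbation is the sign of the input gradient of the loss, which must be recomputed for every received signal.

```python
import numpy as np

# Hypothetical toy setup: a linear "classifier" over an 8-dimensional
# signal feature vector with 4 modulation classes. This only illustrates
# the FGSM mechanics; the paper attacks real DL classifiers.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # hypothetical class weight matrix

def fgsm_perturbation(x, true_label, eps=0.01):
    """One-step FGSM: delta = eps * sign(grad_x loss).

    For linear scores s = W @ x with cross-entropy loss, the input
    gradient has the closed form W^T (softmax(s) - onehot(y)).
    """
    s = W @ x
    p = np.exp(s - s.max())
    p /= p.sum()                          # softmax probabilities
    onehot = np.eye(W.shape[0])[true_label]
    grad = W.T @ (p - onehot)             # d loss / d x
    return eps * np.sign(grad)

# The perturbation depends on x, so it must be recomputed per input;
# a UAP, by contrast, is crafted once and applied to (almost) all inputs.
delta = fgsm_perturbation(np.ones(8), true_label=0)
```

Every component of `delta` has magnitude `eps`, i.e. the attack budget is spent uniformly; the attention-guided UAPs described above instead concentrate the budget around the constellation points.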
Once trained, the generative network can readily construct a large number of UAPs with favorable features, enabling more efficient adversarial attacks and faster adversarial training. Experimental results demonstrate the superiority of the proposed method over existing methods in terms of attack efficiency, imperceptibility and diversity.
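The paper's exact loss is not reproduced here, but the role of a diversity term can be sketched as follows (an illustrative assumption: a mode-seeking-style regularizer that rewards output distance per unit latent distance, so the generator maps distinct latent codes to distinct UAPs instead of collapsing to a single perturbation; `diversity_term` and its exact form are hypothetical).

```python
import numpy as np

def diversity_term(perts, latents, eps=1e-8):
    """Hypothetical diversity regularizer for a UAP generator.

    perts:   (n, d) array of generated perturbations G(z_i)
    latents: (n, k) array of the latent codes z_i that produced them

    Returns the negative mean pairwise ratio
    ||G(z_i) - G(z_j)|| / ||z_i - z_j||, so that minimizing this term
    (alongside the attack loss) pushes the generator toward producing
    diverse perturbations rather than a single collapsed mode.
    """
    n = perts.shape[0]
    ratios = []
    for i in range(n):
        for j in range(i + 1, n):
            d_out = np.linalg.norm(perts[i] - perts[j])
            d_in = np.linalg.norm(latents[i] - latents[j])
            ratios.append(d_out / (d_in + eps))
    return -float(np.mean(ratios))
```

A collapsed generator (identical perturbations for all latent codes) yields a term of zero, while diverse outputs drive the term negative, so adding it to the training loss penalizes mode collapse.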