Images shared on social networks often contain a large amount of private information. Bad actors can use deep learning models to extract this private information from the images, causing privacy leakage. To protect user privacy, reversible adversarial examples (RAEs) have been proposed: they prevent unauthorized models from exploiting the image content while allowing an authorized model to recover the original image. However, existing RAEs fall short in imperceptibility and attack capability. We exploit frequency-domain information to generate RAEs. To improve attack capability, the RAEs are generated by discarding the discriminative information of the original class and adding specific perturbation information. For imperceptibility, we propose embedding the perturbation in the wavelet domain of the image, and we design a low-frequency constraint that confines the perturbation to the high-frequency region and ensures similarity between the original examples and the RAEs. In addition, we propose a momentum pre-processing method that pre-converges the gradient before the formal iterations begin, keeping the gradient direction consistent across iterations; applied to RAE generation, it accelerates gradient convergence and thus speeds up the generation of RAEs. Experimental results on the ImageNet, Caltech-256, and CIFAR-10 datasets show that the proposed method achieves the best attack capability and visual quality among existing RAE generation schemes, with an attack success rate above 99% and a peak signal-to-noise ratio above 42 dB. The generated RAEs also demonstrate good transferability and robustness.
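
The following is a minimal, hypothetical sketch (not the authors' implementation) of two ideas the abstract describes: confining the adversarial perturbation to the high-frequency Haar-wavelet subbands so the low-frequency content, and hence the visual appearance, is preserved, and a momentum "warm-up" phase that pre-converges the gradient-momentum buffer before the formal attack iterations. The function names, the Haar basis, the L∞ budget, and all step counts are illustrative assumptions, and the reversible-embedding machinery for recovering the original image is omitted.

```python
# Hypothetical sketch of wavelet-domain perturbation with momentum pre-processing.
# Assumptions: PyTorch, a single-level orthonormal Haar DWT, an untargeted
# MI-FGSM-style sign update, and made-up hyperparameters.
import torch
import torch.nn.functional as F

def haar_dwt2(x):
    """Single-level orthonormal Haar DWT of an NCHW tensor."""
    a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    cA = (a + b + c + d) / 2   # low-frequency approximation (left untouched)
    cH = (a + b - c - d) / 2   # horizontal detail
    cV = (a - b + c - d) / 2   # vertical detail
    cD = (a - b - c + d) / 2   # diagonal detail
    return cA, cH, cV, cD

def haar_idwt2(cA, cH, cV, cD):
    """Inverse of haar_dwt2."""
    a = (cA + cH + cV + cD) / 2
    b = (cA + cH - cV - cD) / 2
    c = (cA - cH + cV - cD) / 2
    d = (cA - cH - cV + cD) / 2
    out = cA.new_zeros(*cA.shape[:-2], cA.shape[-2] * 2, cA.shape[-1] * 2)
    out[..., 0::2, 0::2], out[..., 0::2, 1::2] = a, b
    out[..., 1::2, 0::2], out[..., 1::2, 1::2] = c, d
    return out

def wavelet_rae_attack(model, x, y, eps=0.03, alpha=0.005, mu=1.0,
                       warmup_steps=5, attack_steps=10):
    """Sign-gradient attack confined to the high-frequency subbands, with a
    momentum warm-up ("pre-processing") phase before the formal iterations."""
    cA, cH, cV, cD = haar_dwt2(x)
    delta = [torch.zeros_like(t, requires_grad=True) for t in (cH, cV, cD)]
    g = [torch.zeros_like(t) for t in (cH, cV, cD)]  # momentum buffers

    def subband_grads():
        x_adv = haar_idwt2(cA, cH + delta[0], cV + delta[1], cD + delta[2])
        loss = F.cross_entropy(model(x_adv.clamp(0, 1)), y)
        return torch.autograd.grad(loss, delta)

    for step in range(warmup_steps + attack_steps):
        gr = subband_grads()
        for i in range(3):
            # L1-normalised momentum accumulation (MI-FGSM style).
            g[i] = mu * g[i] + gr[i] / gr[i].abs().mean().clamp_min(1e-12)
            # During warm-up the momentum is pre-converged but no update is
            # applied; the formal iterations then start with a stable direction.
            if step >= warmup_steps:
                with torch.no_grad():
                    delta[i] += alpha * g[i].sign()
                    delta[i].clamp_(-eps, eps)

    with torch.no_grad():
        return haar_idwt2(cA, cH + delta[0],
                          cV + delta[1], cD + delta[2]).clamp(0, 1)

# Usage sketch against a pretrained classifier (assumed available):
#   model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
#   x_adv = wavelet_rae_attack(model, x, y)
```

Note that the low-frequency subband cA is never perturbed, which is one plausible way to realise the "low-frequency constraint" the abstract mentions: distortions land in the detail subbands, where they are less visible.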