The rapid progress of generative AI has yielded substantial breakthroughs, enabling the generation of realistic synthetic data across various modalities. However, these advances also introduce significant privacy risks, as the models may inadvertently expose sensitive information from their training data. To date, no comprehensive survey has investigated privacy issues in generative AI models, such as attacks on, and defenses of, the privacy of their training data. We identify existing attack techniques and mitigation strategies and summarize the current research landscape. Our survey covers a wide array of generative AI models, including language models, Generative Adversarial Networks, diffusion models, and their multi-modal counterparts. It highlights the critical need for continued research and development of privacy-preserving techniques for generative AI models. Furthermore, we offer insights into the challenges and discuss open problems at the intersection of privacy and generative AI.