Generative AI (GAI) technologies have demonstrated human-level performance on a broad spectrum of tasks. However, recent studies have also examined the potential threats and vulnerabilities posed by GAI, particularly as it becomes increasingly prevalent in sensitive domains such as elections and education; its use in politics raises concerns about manipulation and misinformation. Further exploration is needed to understand the social risks associated with GAI across diverse societal contexts. In this panel, we aim to dissect the impact and risks GAI poses to our social fabric, examining both technological and societal perspectives. We will also present our latest investigations, including the manipulation of ideologies using large language models (LLMs), the potential risk of AI self-consciousness, the application of Explainable AI (XAI) to identify patterns of misinformation and mitigate its dissemination, and the influence of GAI on the quality of public discourse. These insights will serve as catalysts for discussion among the audience on this crucial subject and contribute to a deeper understanding of the importance of responsible development and deployment of GAI technologies.