…The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.

Keywords: generative artificial intelligence, large language models, image generation models, ethics

… being inefficient, useless, or whitewashing [11-14]; it was increasingly transferred into proposed legal norms such as the AI Act of the European Union [15,16]; and it came to be accompanied by two new fields dealing with technical and theoretical issues alike, namely AI alignment and AI safety [17,18]. Both domains have a normative grounding and are devoted to preventing harm or even existential risks stemming from generative AI systems.

On the technical side, variational autoencoders [19], flow-based generative models [20,21], and generative adversarial networks [22] were early successful generative models, supplementing discriminative machine learning architectures.…
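Purely as an illustrative aside, not drawn from the excerpt or its references, the following minimal sketch contrasts a generative model with a discriminative one, using a toy GAN in the spirit of the adversarial setup cited at [22]: the generator learns to produce data, while the discriminator is an ordinary discriminative classifier that only labels inputs as real or generated. PyTorch, the 2-D toy data, and all hyperparameters are assumptions made here for illustration.

```python
# Illustrative sketch only: a minimal GAN on toy data, showing the
# generative/discriminative distinction. All shapes and hyperparameters
# are arbitrary assumptions, not taken from the surveyed works.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

# Generator: a generative model mapping random noise z to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: a discriminative model scoring "real vs. generated".
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(batch, data_dim)  # stand-in for a batch of real data

for step in range(200):
    # Discriminator step: learn to separate real data from generated data.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to produce samples the discriminator labels real.
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The other two families named above take different routes to the same generative goal: variational autoencoders learn an explicit latent-variable model trained with a reconstruction-plus-regularization objective, and flow-based models learn invertible transformations with tractable densities, rather than relying on an adversarial discriminator.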