Objective
This paper investigates how state-of-the-art generative artificial intelligence (AI) image models represent common psychiatric diagnoses. We offer key lessons derived from these representations to inform clinicians, researchers, generative AI companies, policymakers and the public about the potential impacts of AI-generated imagery on mental health discourse.

Methods
We prompted two generative AI image models, Midjourney V.6 and DALL-E 3, with isolated diagnostic terms for common mental health conditions. The resulting images were compiled and presented as examples of current AI behaviour when interpreting psychiatric terminology.

Findings
The AI models generated image outputs for most psychiatric diagnosis prompts. These images frequently reflected cultural stereotypes and historical visual tropes, including gender biases and stigmatising portrayals of certain mental health conditions.

Discussion
These findings illustrate three key points. First, generative AI models reflect cultural perceptions of mental disorders rather than evidence-based clinical ones. Second, AI image outputs resurface historical biases and visual archetypes. Third, the dynamic nature of these models necessitates ongoing monitoring and proactive engagement to manage evolving biases. Addressing these challenges requires a collaborative effort among clinicians, AI developers and policymakers to ensure the responsible use of these technologies in mental health contexts.

Clinical implications
As these technologies become increasingly accessible, it is crucial for mental health professionals to understand AI's capabilities, limitations and potential impacts. Future research should focus on quantifying these biases, assessing their effects on public perception and developing strategies to mitigate potential harm while leveraging the insights these models provide into collective understandings of mental illness.