BACKGROUND Large language models (LLMs) have attracted considerable attention and shown potential in digital health, but their application in mental health remains the subject of ongoing debate. This systematic review summarizes and characterizes the use of LLMs in mental health by investigating the strengths and limitations of recent work and discusses the challenges and opportunities they present for early screening, digital interventions, and other clinical applications.

OBJECTIVE This systematic review aims to summarize how LLMs are used in mental health. We focus on the models, data sources, methodologies, and main outcomes of existing work to assess the applicability of LLMs to early screening, digital interventions, and other clinical applications.

METHODS Adhering to the PRISMA guidelines, this review searched three open-access databases: PubMed, DBLP Computer Science Bibliography (DBLP), and IEEE Xplore (IEEE). The search query was: (mental health OR mental illness OR mental disorder OR psychology OR depression OR anxiety) AND (large language models OR LLMs OR GPT OR ChatGPT OR BERT OR Transformer OR LaMDA OR PaLM OR Claude). We included articles published between January 1, 2017, and September 1, 2023, and excluded non-English articles.

RESULTS In total, 32 articles were evaluated, covering mental health analysis using social media datasets (n=13), the use of LLMs for mental health chatbots (n=10), and other applications of LLMs in mental health (n=9). LLMs show substantial effectiveness in classifying and detecting mental health issues and can deliver more efficient, personalized care that improves telepsychological services. However, assessments also indicate that the risks associated with their current clinical use may outweigh the benefits. These risks include inconsistencies in generated text, hallucinated content, and the absence of a comprehensive ethical framework.

CONCLUSIONS This systematic review examines the clinical applications of LLMs in mental health, highlighting both their potential and their inherent risks. It identifies significant concerns, including biases in training data, ethical dilemmas, the difficulty of interpreting the 'black box' nature of LLMs, and questions about the accuracy and reliability of the content they produce. Consequently, LLMs should not be considered substitutes for professional mental health services. Despite these challenges, the rapid advancement of LLMs highlights their potential as new clinical tools and underscores the need for continued research and development in this field.
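The review does not publish its search scripts, so the following is a minimal sketch, assuming Python with the requests library, of how the PubMed leg of the stated search strategy could be reproduced through NCBI's public E-utilities esearch endpoint. The query string and date window are taken verbatim from the METHODS above; the variable names and retmax setting are illustrative assumptions, and the DBLP and IEEE Xplore legs would need their own API calls.

    import requests

    # Boolean query copied from the METHODS section of the review.
    QUERY = (
        "(mental health OR mental illness OR mental disorder OR psychology "
        "OR depression OR anxiety) AND (large language models OR LLMs OR GPT "
        "OR ChatGPT OR BERT OR Transformer OR LaMDA OR PaLM OR Claude)"
    )

    # NCBI E-utilities esearch endpoint (public; no API key needed for light use).
    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    params = {
        "db": "pubmed",
        "term": QUERY,
        "datetype": "pdat",       # filter on publication date
        "mindate": "2017/01/01",  # inclusion window from METHODS
        "maxdate": "2023/09/01",
        "retmax": 200,            # illustrative page size, not from the review
        "retmode": "json",
    }

    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    result = resp.json()["esearchresult"]
    print(f"{result['count']} PubMed records; first PMIDs: {result['idlist'][:10]}")

A query like this reproduces only the identification step of the PRISMA workflow; the language screening (English only) and the manual eligibility assessment described in the review still have to happen downstream.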