The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information.
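As a rough illustration of the kind of LLM-based text classification the review discusses, the sketch below labels a headline as reliable or fake using the Hugging Face zero-shot-classification pipeline. The model choice, candidate labels, and example headline are illustrative assumptions, not methods taken from the review itself.

```python
# Minimal sketch (assumptions, not the review's method): zero-shot
# classification of a headline with the Hugging Face `transformers` pipeline.
from transformers import pipeline

# Load a zero-shot classifier; bart-large-mnli is a commonly used NLI model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

headline = "Scientists confirm chocolate cures all known diseases overnight."

# Score the headline against hypothesized labels.
result = classifier(headline, candidate_labels=["reliable news", "fake news"])

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

In practice, the review's surveyed approaches go beyond single-text classification, combining such classifiers with fact-checking against external sources, profile attribute analysis, and network or behavioral signals.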