This paper explores the dual role of Large Language Models (LLMs) in the context of online misinformation and disinformation. In today's digital landscape, where the internet and social media enable the rapid dissemination of information, distinguishing accurate content from falsified information is a formidable challenge. Misinformation, which arises unintentionally, and disinformation, which is crafted deliberately, are at the forefront of this challenge. LLMs such as OpenAI's GPT-4, equipped with advanced language generation abilities, present a double-edged sword in this context. While they hold promise for combating misinformation through fact-checking and the detection of LLM-generated text, their ability to generate realistic, contextually relevant text also poses risks of creating and propagating misinformation. Moreover, LLMs suffer from problems such as biases, knowledge cutoffs, and hallucinations, which may further perpetuate misinformation and disinformation. The paper outlines historical developments in misinformation detection and the effects of misinformation on social media consumption, especially among youth, and introduces LLMs and their applications across various domains. It then critically analyzes the potential of LLMs to both generate and counter misinformation and disinformation on sensitive topics such as healthcare, COVID-19, and political agendas. It further discusses mitigation strategies, ethical considerations, and regulatory measures, summarizing prior methods and proposing future research directions for leveraging the benefits of LLMs while minimizing the risks of misuse. The paper concludes by acknowledging LLMs as powerful tools with significant implications for both spreading and combating misinformation in the digital age.