Artificial intelligence (AI) has undergone cycles of enthusiasm and stagnation, often referred to as “AI winters.” The introduction of large language models (LLMs), exemplified by the release of OpenAI’s ChatGPT in late 2022, has revitalized interest in AI, particularly in health-care applications, including radiology. The roots of AI in language processing can be traced to Alan Turing’s 1950 paper “Computing Machinery and Intelligence,” which laid conceptual foundations for natural language processing (NLP). Early NLP systems concentrated primarily on natural language understanding (NLU) and natural language generation (NLG) but struggled with contextual comprehension and with long text sequences. More recent NLP methods have shown considerable promise in automating the analysis of unstructured data, including electronic health records and radiology reports. LLMs, built on the transformer architecture introduced in 2017, excel at capturing long-range language dependencies and support tasks such as report generation and clinical decision support. This review critically examines the evolution from traditional NLP to LLMs and highlights their transformative potential in radiology. Despite their advantages, challenges persist, including data privacy concerns, the potential to generate misinformation, and the need for rigorous validation protocols. Addressing these challenges is essential to harnessing the full potential of LLMs to enhance diagnostic precision and workflow efficiency in radiology, ultimately improving patient care and outcomes.