ChatGPT-3.5's underperformance on the 2022 Self-Assessment Study Program (SASP) examination offers an ideal opportunity to address the progress of artificial intelligence (AI) in urology.1 Previously, we had high hopes that it could be used in the clinical support setting and, to our excitement, it has excelled.2,3 Gabrielson et al mused that the use of AI in the field is "limited only by our imagination."4 In the interest of progress, Huynh et al then assessed the large language model (LLM) on its urological education capabilities, where it had a poor showing of 28.2% on the SASP.1 This unimpressive performance posed the question: are we overestimating the capabilities of LLMs, or are we merely witnessing the growing pains of a developing technology? Perhaps we can boast that urology is more complex than other disciplines, leading to specialty-specific or even topic-specific competencies of the LLM! While we partially touch upon this nuance, this editorial serves to review the landscape of AI in urology and to offer findings that support our optimism in its progression.

LLMs have performed well on other disciplines' examinations, hence our surprise when comparing ChatGPT's performance on the SASP with the developing body of medical literature in other specialties. The accuracy of 28.2% falls short of its performance on the 2021 and 2022 American College of Gastroenterology self-assessment examinations (68.3% with ChatGPT-3.0), the Dermatology Specialty Certificate examination (63% with ChatGPT-3.5), and the Ophthalmic Knowledge Assessment Program examination (46.4% with ChatGPT-3.5; see Figure).5-7 Its propensity to err is attributed to a slew of causes, including its potential to generate false information (hallucinate), its knowledge cutoff of September 2021, the lack of indexed information, and the inability to adequately contextualize an inquiry.5-7 These shortcomings have been documented in the literature, yet