Clinical medicine stands on the brink of a transformation as large language models (LLMs), driven by advances in deep learning and the growing availability of clinical data, emerge as potential tools for medical practice. Misdiagnosis remains a persistent problem in medical treatment, and LLMs may help to address it. However, their reliability and medical competence, particularly in real-world professional scenarios involving intricate logical reasoning, remain uncertain. To address this gap, we present a quantitative evaluation method that uses thoracic surgery questions as a benchmark of LLMs' medical proficiency. Clinical questions covering various diseases were collected, and a test format consisting of multiple-choice questions and case analysis was designed based on the Chinese National Senior Health Professional Technical Qualification Examination. Five LLMs of different scales and sources answered these questions, and professional thoracic surgeons evaluated the responses and provided feedback. Among these models, GPT-4 demonstrated the highest performance with a score of 48.67 out of 100, achieving accuracies of 0.62, 0.27, and 0.63 on single-choice, multiple-choice, and case-analysis questions, respectively; this nevertheless falls short of the examination's passing threshold. Finally, this paper analyzes the performance, advantages, disadvantages, and risks of LLMs in this setting and proposes directions for improvement, offering insight into the capabilities and limitations of LLMs in a specialized medical domain.
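
As a rough illustration of how per-category accuracies and a 100-point overall score of the kind reported above might be aggregated, consider the following Python sketch. The category weights, question counts, and all names here are hypothetical assumptions for illustration only, not taken from the study, so the computed total will not match the reported 48.67.

```python
# Hypothetical scoring sketch: per-category accuracy combined into a
# weighted 100-point score. Weights and question counts are assumed,
# not the examination's actual grading scheme.
from dataclasses import dataclass


@dataclass
class CategoryResult:
    correct: int   # questions answered correctly in this category
    total: int     # questions asked in this category
    weight: float  # points this category contributes to the 100-point exam

    @property
    def accuracy(self) -> float:
        return self.correct / self.total


def overall_score(results: dict[str, CategoryResult]) -> float:
    """Weighted exam score out of 100: each category contributes
    accuracy * weight points."""
    return sum(r.accuracy * r.weight for r in results.values())


# Example using the reported per-category accuracies (0.62, 0.27, 0.63)
# with illustrative counts and weights summing to 100 points.
gpt4_results = {
    "single_choice": CategoryResult(correct=62, total=100, weight=40.0),
    "multi_choice": CategoryResult(correct=27, total=100, weight=30.0),
    "case_analysis": CategoryResult(correct=63, total=100, weight=30.0),
}
print(f"overall score: {overall_score(gpt4_results):.2f} / 100")
```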