Introduction
As a large language model, Chat Generative Pretrained Transformer (ChatGPT) provides a valuable tool for various medical scenarios through its interactive, dialogue-based interface. However, studies of ChatGPT's effectiveness in handling dental tasks are lacking. This study aimed to compare the knowledge and comprehension abilities of ChatGPT-3.5 and ChatGPT-4 with those of dental students regarding periodontal surgery.

Materials and Methods
A total of 134 dental students participated in this study. We designed a questionnaire consisting of four questions about students' inclination toward ChatGPT, 25 multiple-choice questions, and one open-ended question. For the comparison between ChatGPT-3.5 and ChatGPT-4, the inclination questions were removed and the rest of the questionnaire was kept the same. We measured the response times of ChatGPT-3.5 and ChatGPT-4 and compared their performance with that of the dental students. For the students' answers to the open-ended question, we also compared the outcomes of ChatGPT-4's review with those of the teacher's review.

Results
On average, ChatGPT-3.5 and ChatGPT-4 required 3.63 ± 1.18 s (95% confidence interval [CI], 3.14–4.11) and 12.49 ± 7.29 s (95% CI, 9.48–15.50) per multiple-choice question, respectively (p < 0.001). On these 25 questions, the students answered a mean of 21.51 ± 2.72 correctly, versus 14 for ChatGPT-3.5 and 20 for ChatGPT-4. Furthermore, the outcomes of ChatGPT-4's review were consistent with those of the teacher's review.

Conclusions
For dental examinations related to periodontal surgery, ChatGPT's accuracy was not yet comparable to that of the students. Nevertheless, ChatGPT shows promise in assisting students with the curriculum and helping practitioners with clinical letters and reviews of students' textual descriptions.