Objective:
This study aimed to evaluate the utility and efficacy of ChatGPT in addressing questions related to thyroid surgery, taking into account accuracy, readability, and relevance.
Methods:
A simulated physician-patient consultation on thyroidectomy was conducted by posing 21 hypothetical questions to ChatGPT. Responses were evaluated with the DISCERN instrument by 3 independent ear, nose and throat specialists. Readability was assessed using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index.
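Readability indices such as these are simple functions of sentence length and word complexity. As an illustration only (not part of the study's methodology), the Flesch-Kincaid Grade Level can be computed from average words per sentence and syllables per word; the sketch below uses a naive vowel-group syllable heuristic rather than the dictionary-based counters in standard readability software:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Split into sentences and words, then apply the published formula:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, monosyllabic sentences score at or below a first-grade level;
# long clinical sentences push the grade level well above 6.
print(flesch_kincaid_grade("The cat sat on the mat. It was warm."))
```

A score of roughly 6 corresponds to the sixth-grade reading level commonly recommended for patient-facing health materials.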
Results:
The majority of ChatGPT responses were rated fair or above on the DISCERN instrument, with a mean score of 45.44 ± 11.24. However, all readability measures consistently indicated a reading level above the recommended sixth-grade level, suggesting the information may not be easily comprehensible to the general public.
Conclusion:
While ChatGPT shows potential in answering patient queries related to thyroid surgery, its responses are not yet written at a level suited to patient comprehension. Further refinement is needed before it can be applied effectively in this domain.