BACKGROUND
Puerperal mastitis, affecting up to 30% of breastfeeding women, is a common condition managed in general surgery outpatient clinics. The increasing integration of artificial intelligence (AI) into healthcare presents opportunities for enhancing decision-making processes.
OBJECTIVE
This study evaluates ChatGPT’s responses to frequently asked questions about puerperal mastitis, focusing on their quality, adherence to clinical guidelines, and patient safety.
METHODS
Fifteen questions were categorized into general information (n=2), symptoms and diagnosis (n=6), treatment (n=2), and prognosis (n=5). These questions, written in Turkish to reflect the target population, were submitted to ChatGPT 4.0. Responses were evaluated by five raters against five criteria: sufficiency of length, understandability, accuracy, adherence to the literature, and patient safety. DISCERN and Flesch-Kincaid readability scores were also calculated. Statistical analyses compared ratings across question categories and criteria.
RESULTS
ChatGPT’s responses were rated “excellent” overall, with higher scores for treatment- and prognosis-related questions. DISCERN scores differed significantly between question categories (p=0.014), with treatment questions rated highest. Flesch-Kincaid scores indicated readability at a university graduate level. Although strong correlations were observed between adherence to the literature and patient safety for certain questions, evaluator consistency varied, with significant differences in accuracy ratings (p<0.001).
CONCLUSIONS
ChatGPT demonstrated adequate capability in providing information on puerperal mastitis, particularly for treatment and prognosis. However, evaluator variability and the subjective nature of the assessments highlight the need for further optimization of AI tools. Future research should emphasize iterative questioning and dynamic updates to AI knowledge bases to enhance reliability and accessibility.