Objective.
To evaluate the impact of prompt construction on the quality of AI chatbot responses in the context of head and neck surgery.
Study design.
Observational and evaluative study.
Setting.
International collaboration involving 16 researchers from 11 European centers specializing in head and neck surgery.
Methods.
A total of 24 questions, divided into clinical scenarios, theoretical questions, and patient inquiries, were developed. Each question was submitted to ChatGPT-4o both with and without a structured prompt format known as SMART (Seeker, Mission, AI Role, Register, Targeted Question). The AI-generated responses were evaluated by experienced head and neck surgeons using the QAMAI instrument, which assesses accuracy, clarity, relevance, completeness, source quality, and usefulness.
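For illustration only, a minimal sketch of how a SMART-structured prompt might be assembled is shown below. The field labels follow the SMART acronym described above; the SmartPrompt helper and all example values are hypothetical and are not drawn from the study's question set, and the study itself submitted prompts directly to ChatGPT-4o rather than through code.

```python
# Minimal sketch of assembling a SMART-structured prompt.
# All field values below are hypothetical examples, not items from the study.
from dataclasses import dataclass


@dataclass
class SmartPrompt:
    seeker: str             # who is asking (e.g., patient, trainee, specialist)
    mission: str            # what the answer will be used for
    ai_role: str            # role the chatbot is asked to assume
    register: str           # tone and level expected in the answer
    targeted_question: str  # the clinical, theoretical, or patient question

    def render(self) -> str:
        # Concatenate the five SMART fields into a single contextualized prompt.
        return (
            f"Seeker: {self.seeker}\n"
            f"Mission: {self.mission}\n"
            f"AI Role: {self.ai_role}\n"
            f"Register: {self.register}\n"
            f"Question: {self.targeted_question}"
        )


# Hypothetical example for a patient-inquiry item.
prompt = SmartPrompt(
    seeker="Patient recently diagnosed with early-stage laryngeal cancer",
    mission="Understand treatment options before a surgical consultation",
    ai_role="Head and neck surgeon explaining options to a lay person",
    register="Plain, non-technical language",
    targeted_question="What are the main treatment options for early laryngeal cancer?",
)
print(prompt.render())

# The unstructured condition would submit only the targeted question itself.
```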
Results.
Responses generated with the SMART prompt scored significantly higher across all QAMAI dimensions than those generated with unstructured prompts. Median QAMAI scores were 27.5 (IQR 25–29) for SMART prompts versus 24 (IQR 21.8–25) for unstructured prompts (p < 0.001). Clinical scenarios and patient inquiries showed the greatest improvements, while theoretical questions also benefited, though to a lesser extent. Source quality improved notably with the SMART prompt, particularly for theoretical questions.
Conclusions.
The study suggests that the structured SMART prompt format significantly enhances the quality of AI chatbot responses in head and neck surgery. This approach improves the accuracy, relevance, and completeness of AI-generated information, underscoring the importance of well-constructed prompts in clinical applications. Further research is warranted to explore the applicability of SMART prompts across different medical specialties and AI platforms.