BACKGROUND
In the digital age, large language models (LLMs) such as ChatGPT have emerged as important sources of health care information. Their interactive capabilities offer promise for expanding access to health information, particularly for groups facing traditional barriers such as lack of insurance or language barriers. Despite their growing public health use, with millions of medical queries processed weekly, the quality of LLM-provided information remains inconsistent. Prior studies have predominantly assessed ChatGPT's English responses, overlooking the needs of non-English speakers in the U.S. This study addresses that gap by evaluating the quality and linguistic parity of vaccination information from ChatGPT and the Centers for Disease Control and Prevention (CDC), with an emphasis on health equity.
OBJECTIVE
This research aims to assess the quality and language equity of vaccination information provided by ChatGPT and the CDC in English and Spanish. It highlights the critical need for cross-language evaluation to ensure equitable access to health information for all linguistic groups.
METHODS
We conducted a comparative analysis of ChatGPT's and the CDC's responses to frequently asked vaccination questions in both languages. The evaluation included quantitative and qualitative assessments of accuracy, readability, and understandability. Accuracy was gauged by the perceived level of misinformation, readability by the Flesch Reading Ease score and Flesch-Kincaid grade level, and understandability by items from the Agency for Healthcare Research and Quality's Patient Education Materials Assessment Tool (PEMAT).
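For readers unfamiliar with these readability metrics, the sketch below shows the standard English-language Flesch Reading Ease and Flesch-Kincaid grade level formulas. It is a minimal illustration, not the study's scoring pipeline: the vowel-group syllable counter is a rough heuristic (published tools use dictionary-based syllabification), and the Spanish analysis would require adapted variants of these formulas.

```python
# Minimal sketch of the readability formulas referenced above.
# Assumption: English text and a crude vowel-group syllable heuristic.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels; rough but adequate for a sketch.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    wps = n_words / sentences       # words per sentence
    spw = syllables / n_words       # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    grade_level = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid grade level
    return reading_ease, grade_level

ease, grade = readability("Vaccines help your body build protection against disease.")
print(f"Reading ease: {ease:.1f}, grade level: {grade:.1f}")
```

Higher reading-ease scores and lower grade levels indicate text that is easier to read; patient-facing guidance is commonly targeted at roughly a sixth-grade level.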
RESULTS
The study found that both ChatGPT and the CDC provided mostly accurate and understandable responses. However, reading grade levels often exceeded the American Medical Association's recommended levels, particularly in English. CDC responses were more readable than ChatGPT's in both languages. Notably, some Spanish responses appeared to be direct translations from English, resulting in unnatural phrasing. These findings underscore both the potential and the challenges of using ChatGPT to support access to health information.
CONCLUSIONS
ChatGPT holds potential as a health information resource but requires improvements in readability and linguistic equity to serve diverse populations effectively. Crucially, the default user experience with ChatGPT, the one typically encountered by users without advanced language or prompting skills, can significantly shape health perceptions. This matters from a public health standpoint because most users will interact with LLMs in this most accessible form. Ensuring that default responses are accurate, understandable, and equitable is essential for fostering informed health decisions across diverse communities.