Background: The public launch of OpenAI's ChatGPT generated immediate interest in the use of large language models (LLMs). Healthcare institutions are now grappling with establishing policies and guidelines for the use of these technologies, yet little is known about how healthcare providers view LLMs in medical settings. Moreover, there have been no studies of how pediatric providers are adopting these readily accessible tools.

Objective: This study aims to determine how pediatric providers are currently using LLMs in their work, as well as their interest in using a HIPAA-compliant version of ChatGPT in the future.
Methods: A survey instrument consisting of structured and unstructured questions was iteratively developed and then distributed via REDCap to all Boston Children's Hospital prescribers. Participation was voluntary and uncompensated, and all survey responses were anonymous.

Results: Surveys were completed by 390 pediatric providers. Approximately 50% of respondents had used an LLM; of these, 75% were already using an LLM for nonclinical work and 27% for clinical work. Providers detailed the various ways they are currently using an LLM in their clinical and nonclinical work. Only 29% of respondents indicated that ChatGPT should be used for patient care in its present state; however, 73% reported that they would use a HIPAA-compliant version of ChatGPT if one were available. Providers' proposed future uses of LLMs in healthcare are also described.

Conclusions: Despite significant concerns and barriers to LLM use in healthcare, pediatric providers are already using LLMs at work. This study offers healthcare leaders and policymakers needed information about how providers are using LLMs in a clinical context.