BACKGROUND
The public debut of AI-based large language models (LLMs) in 2022 has given rise to a wide-ranging discussion within the academic community. The discourse is multifaceted: enthusiastic users laud the sophisticated chatbots' potential to assist with writing tasks, while critics warn that the cultural and ethical costs of relying on LLMs may be too high to bear. The literature on LLMs spans multiple fields and often focuses on overlapping themes, such as their appropriate integration, analytical performance, and practical benefits for users. However, there is a notable gap in examining the nature of these public discussions themselves. The societal impact of LLMs ultimately depends on users' opinions, as technophiles and Luddites alike shape the trajectory of technological adoption. To address this gap, our study assessed public opinion and perception of the most popular LLM-based chatbot available: ChatGPT.
OBJECTIVE
In this work, we aimed to understand how opinions and sentiment about LLMs shared by the general public may contrast with those expressed by academic researchers and other field experts, in order to gain a broader view of the future direction of LLMs in healthcare.
METHODS
We used the Academic Twitter API to retrieve tweets matching the search query “ChatGPT AND (health OR healthcare OR hospital OR physician OR nurse OR nursing OR patient)”. Data collection covered the period from December 1, 2022, the day after ChatGPT became publicly available, to March 20, 2023. Our analysis consisted of three phases: 1) human-labeled sentiment classification of tweets; 2) algorithm-based sentiment classification of tweets; and 3) structural topic modeling to group tweet content into distinct topics.
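The abstract itself contains no code; as a minimal sketch, the collection step could be implemented in R with the academictwitteR package, assuming access to the (since-retired) Academic Research track of the Twitter API. The bearer token, result cap, and output path below are illustrative placeholders, not study parameters.

```r
# Minimal sketch of the tweet-collection phase using academictwitteR.
# Assumes an Academic Research bearer token in the environment. In the
# Twitter API v2 query language AND is implicit, so the "AND" keyword
# from the query in the text is written as a space.
library(academictwitteR)

tweets <- get_all_tweets(
  query        = "ChatGPT (health OR healthcare OR hospital OR physician OR nurse OR nursing OR patient)",
  start_tweets = "2022-12-01T00:00:00Z",
  end_tweets   = "2023-03-20T00:00:00Z",
  bearer_token = Sys.getenv("TWITTER_BEARER_TOKEN"),  # placeholder credential
  n            = Inf,            # no cap; collect everything in the window
  data_path    = "data/tweets/"  # also persist raw JSON pages to disk
)
```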
RESULTS
Using an innovative approach that integrates the Syuzhet package with GPT-3.5, we achieved 84% accuracy in sentiment classification. Further investigation using structural topic modeling revealed eight distinct topics covering both optimistic and concerned perspectives. The results indicated a predominantly positive sentiment towards the integration of LLMs in healthcare, especially in areas such as patient care and decision making. However, notable concerns were raised in the areas of mental health support and patient communication.
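For the analysis phases named in the Methods, a minimal sketch using the syuzhet and stm R packages follows. It assumes a data frame `tweets` with a `text` column (e.g., from the collection step above); the sentiment thresholds and preprocessing choices are assumptions, the GPT-3.5 component of the hybrid classifier is omitted, and K = 8 simply mirrors the eight topics reported here.

```r
# Sketch of Phase 2 (lexicon-based sentiment via syuzhet) and Phase 3
# (structural topic model via stm). Thresholds and preprocessing are
# illustrative, not the authors' exact pipeline.
library(syuzhet)
library(stm)

# Phase 2: score each tweet, then map the continuous score to a label
scores <- get_sentiment(tweets$text, method = "syuzhet")
tweets$sentiment <- ifelse(scores > 0, "positive",
                           ifelse(scores < 0, "negative", "neutral"))

# Phase 3: preprocess the corpus, then fit an 8-topic structural topic model
processed <- textProcessor(tweets$text, metadata = tweets)
prepped   <- prepDocuments(processed$documents, processed$vocab, processed$meta)
fit <- stm(documents = prepped$documents, vocab = prepped$vocab,
           K = 8, data = prepped$meta, init.type = "Spectral")
labelTopics(fit)  # inspect the top terms for each of the eight topics
```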
CONCLUSIONS
This study highlights the significant potential of LLMs to transform healthcare while also surfacing ethical and practical challenges that must be addressed. It contributes to the ongoing scholarly discourse on the advantages and disadvantages of LLMs within the healthcare domain.