Objective
To evaluate the quality of information regarding the prevention and treatment of COVID-19 available to the general public from all countries.
Design
Systematic analysis of websites containing information targeted at the general public, using the ‘Ensuring Quality Information for Patients’ (EQIP) tool (score 0–36), the Journal of the American Medical Association (JAMA) benchmark (score 0–4) and the DISCERN tool (score 16–80).
Data sources
Twelve popular search terms, including ‘Coronavirus’, ‘COVID-19’, ‘Wuhan virus’, ‘How to treat coronavirus’ and ‘COVID-19 prevention’, were identified using ‘Google AdWords’ and ‘Google Trends’. Unique links from the first 10 pages of results for each search term were identified and evaluated for quality of information.
Eligibility criteria for selecting studies
All websites written in English and providing information on the prevention or treatment of COVID-19 intended for the general public were considered eligible. Websites intended for professionals or for specific isolated populations, such as students from one particular school, were excluded, as were websites with only video content, marketing content, daily caseload updates or news dashboard pages with no health information.
Results
Of the 1275 identified websites, 321 (25%) were eligible for analysis. The overall EQIP, JAMA and DISCERN scores were 17.8, 2.7 and 38.0, respectively. Websites originated from 34 countries, with the majority from the USA (55%). News services (50%) and government/health departments (27%) were the most common sources of information, and their information quality varied significantly. The majority of websites discussed prevention alone, despite popular search interest in COVID-19 treatment. Websites discussing both prevention and treatment (n=73, 23%) scored significantly higher across all tools (p<0.001).
Conclusion
This comprehensive assessment of online COVID-19 information using the EQIP, JAMA and DISCERN tools indicates that most websites were inadequate. This necessitates improvements in online resources to facilitate public health measures during the pandemic.
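To illustrate the kind of group comparison reported above (websites covering both prevention and treatment scoring higher than prevention-only websites), the following is a minimal sketch using made-up scores, not the study's data. It assumes SciPy is available and uses a Mann-Whitney U test as one reasonable choice; the abstract does not state which statistical test the authors applied, and the group sizes and score ranges below are illustrative only.

```python
# Minimal sketch (fabricated scores, not the study's data): comparing EQIP
# scores between prevention-only websites and websites covering both
# prevention and treatment. The choice of test and all numbers are assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
eqip_prevention_only = rng.integers(10, 22, size=248)           # EQIP range 0-36
eqip_prevention_and_treatment = rng.integers(14, 28, size=73)   # n=73 as reported

stat, p_value = mannwhitneyu(
    eqip_prevention_and_treatment, eqip_prevention_only, alternative="greater"
)
print(f"Mann-Whitney U = {stat:.0f}, p = {p_value:.4f}")
```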
Background
ChatGPT-4 is the latest release of a novel artificial intelligence (AI) chatbot able to answer freely formulated and complex questions. In the near future, ChatGPT could become the new standard for health care professionals and patients to access medical information. However, little is known about the quality of medical information provided by the AI.
Objective
We aimed to assess the reliability of medical information provided by ChatGPT.
Methods
Medical information provided by ChatGPT-4 on the 5 hepato-pancreatico-biliary (HPB) conditions with the highest global disease burden was assessed with the Ensuring Quality Information for Patients (EQIP) tool. The EQIP tool measures the quality of internet-available information and consists of 36 items divided into 3 subsections. In addition, 5 guideline recommendations per analyzed condition were rephrased as questions and input to ChatGPT, and agreement between the guideline recommendations and the AI answers was assessed by 2 authors independently. All queries were repeated 3 times to measure the internal consistency of ChatGPT.
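As a rough illustration of the repeated-query step, the sketch below submits one guideline-derived question to a chat model several times and checks whether the answers are identical. This is not the authors' code: the abstract does not state how ChatGPT-4 was accessed, so the sketch assumes the official OpenAI Python SDK (version 1 or later) with an API key in the environment; the model name, the question text and the exact-string notion of consistency are all illustrative assumptions.

```python
# Minimal sketch (not the study's code): querying a chat model with a
# guideline-derived question N times to probe internal consistency.
# Assumes the OpenAI Python SDK >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What is the first-line treatment for acute gallstone pancreatitis?"  # illustrative
N_REPEATS = 3  # the study repeated each query 3 times

answers = []
for _ in range(N_REPEATS):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": QUESTION}],
    )
    answers.append(response.choices[0].message.content)

# Simplification: exact-string identity; the study rated consistency of
# content, not verbatim wording.
consistent = len(set(answers)) == 1
print(f"Identical answers across {N_REPEATS} runs: {consistent}")
```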
Results
Five conditions were identified (gallstone disease, pancreatitis, liver cirrhosis, pancreatic cancer, and hepatocellular carcinoma). The median EQIP score across all conditions was 16 (IQR 14.5-18) out of a possible 36 items. By subsection, median scores for the content, identification, and structure data were 10 (IQR 9.5-12.5), 1 (IQR 1-1), and 4 (IQR 4-5), respectively. Agreement between guideline recommendations and the answers provided by ChatGPT was 60% (15/25). Interrater agreement as measured by the Fleiss κ was 0.78 (P<.001), indicating substantial agreement. Internal consistency of the answers provided by ChatGPT was 100%.
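For readers unfamiliar with the Fleiss κ statistic reported above, the following is a minimal sketch of how interrater agreement between 2 raters over 25 guideline-derived items could be computed. The ratings are randomly generated placeholders, not the study's data; the sketch assumes the statsmodels package and its inter_rater module.

```python
# Minimal sketch (fabricated ratings, not the study's data): Fleiss' kappa
# for two raters judging whether each ChatGPT answer agrees with the guideline.
# Assumes statsmodels is installed.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = 25 question items, columns = 2 raters; 1 = "agrees with guideline",
# 0 = "disagrees". Values below are made up for illustration.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(25, 2))

table, _ = aggregate_raters(ratings)          # items x categories count table
kappa = fleiss_kappa(table, method="fleiss")  # chance-corrected agreement
print(f"Fleiss kappa: {kappa:.2f}")
```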
Conclusions
ChatGPT provides medical information of comparable quality to available static internet information. Although currently of limited quality, large language models could become the future standard for patients and health care professionals to gather medical information.