Importance: The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in answering patient questions by drafting responses that clinicians review.

Objective: To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

Design, Setting, and Participants: In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit's r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question, along with anonymized and randomly ordered physician and chatbot responses, was evaluated in triplicate by a team of licensed health care professionals. Evaluators chose "which response was better" and judged both "the quality of information provided" (very poor, poor, acceptable, good, or very good) and "the empathy or bedside manner provided" (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.

Results: Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated good or very good quality (≥4), for instance, was higher for the chatbot than for physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%), a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for the chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%), a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

Conclusions: In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could further assess whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
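The reported 3.6-fold and 9.8-fold differences follow directly from the proportions given above. A minimal check in Python, assuming the multipliers are simple prevalence ratios (chatbot proportion divided by physician proportion); the percentages are copied from the Results:

# Prevalence ratios implied by the reported proportions (assumption:
# the multipliers are chatbot prevalence divided by physician prevalence).
quality = {"chatbot": 78.5, "physician": 22.1}  # % rated good or very good
empathy = {"chatbot": 45.1, "physician": 4.6}   # % rated empathetic or very empathetic

print(f"quality ratio: {quality['chatbot'] / quality['physician']:.1f}")  # 3.6
print(f"empathy ratio: {empathy['chatbot'] / empathy['physician']:.1f}")  # 9.8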
As SARS-CoV-2 continues to spread and evolve, detecting emerging variants early is critical for public health interventions. Inferring lineage prevalence by clinical testing is infeasible at scale, especially in areas with limited resources, participation, or testing and/or sequencing capacity, which can also introduce biases1–3. SARS-CoV-2 RNA concentration in wastewater successfully tracks regional infection dynamics and provides less biased abundance estimates than clinical testing4,5. Tracking virus genomic sequences in wastewater would improve community prevalence estimates and detect emerging variants. However, two factors limit wastewater-based genomic surveillance: low-quality sequence data and the inability to estimate relative lineage abundance in mixed samples. Here we resolve these critical issues to perform a high-resolution, 295-day wastewater and clinical sequencing effort, in the controlled environment of a large university campus and the broader context of the surrounding county. We developed and deployed improved virus concentration protocols and deconvolution software that fully resolve multiple virus strains from wastewater. We detected emerging variants of concern up to 14 days earlier in wastewater samples, and identified multiple instances of virus spread not captured by clinical genomic surveillance. Our study provides a scalable solution for wastewater genomic surveillance that allows early detection of SARS-CoV-2 variants and identification of cryptic transmission.
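The deconvolution step can be illustrated with a toy example. The sketch below recovers relative lineage abundances from observed mutation frequencies by non-negative least squares over a lineage-by-mutation "barcode" matrix; the barcode matrix, the observed frequencies, and the use of NNLS are all illustrative assumptions, as the abstract does not specify the software's exact algorithm.

import numpy as np
from scipy.optimize import nnls

# Rows: mutations; columns: candidate lineages. barcodes[i, j] is the
# expected frequency of mutation i in lineage j (binary in this toy example).
barcodes = np.array([
    [1, 0, 0],  # mutation A: found only in lineage 1
    [0, 1, 0],  # mutation B: found only in lineage 2
    [1, 1, 0],  # mutation C: shared by lineages 1 and 2
    [0, 0, 1],  # mutation D: found only in lineage 3
], dtype=float)

# Observed allele frequencies of each mutation in the mixed wastewater sample.
observed = np.array([0.60, 0.25, 0.85, 0.15])

# Non-negative least squares: mixture weights >= 0 that best reproduce the
# observed frequencies, then normalized to relative lineage abundances.
weights, _ = nnls(barcodes, observed)
abundance = weights / weights.sum()
print(dict(zip(["lineage1", "lineage2", "lineage3"], abundance.round(3))))
# -> {'lineage1': 0.6, 'lineage2': 0.25, 'lineage3': 0.15}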
Author Contributions: Dr Ganson had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: Ganson, Jackson, Nagata.
Despite sharing a target site with the neonicotinoids (the nicotinic acetylcholine receptor), sulfoxaflor was largely unaffected by existing cases of neonicotinoid resistance in B. tabaci and T. vaporariorum. Neonicotinoid resistance mechanisms in these whitefly species are known to be based primarily on enhanced detoxification of the insecticide. This lack of cross-resistance indicates that sulfoxaflor is a valuable new tool for the management of sap-feeding pests already resistant to established insecticide groups.
Objective: The COVID-19 pandemic changed clinician electronic health record (EHR) work in a multitude of ways. To evaluate how, we measured ambulatory clinician EHR use in the United States throughout the COVID-19 pandemic.

Materials and Methods: We used EHR metadata from ambulatory care clinicians in 366 health systems using the Epic EHR system in the United States from December 2019 to December 2020. We computed descriptive statistics for clinician EHR use, including active-use time across clinical activities, time after hours, and messages received. We used multivariable regression to evaluate total and after-hours EHR work, adjusting for daily visit volume and organizational characteristics, and to evaluate the association between messages and EHR time.

Results: Clinician time spent in the EHR per day dropped at the onset of the pandemic but had recovered to higher than prepandemic levels by July 2020. Time spent actively working in the EHR after hours showed similar trends. These differences persisted in multivariable models. In-Basket messages received increased compared with prepandemic levels, with the largest increase coming from messages from patients, which rose to 157% of the prepandemic average. Each additional patient message was associated with a 2.32-minute increase in EHR time per day (P < .001).

Discussion: Clinicians spent more total and after-hours time in the EHR in the latter half of 2020 than in the prepandemic period, driven in part by increased time in Clinical Review and In-Basket messaging.

Conclusions: Reimbursement models and workflows for the post-COVID era should account for these demands on clinician time that occur outside the traditional visit.
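To make the modeling step concrete, here is a hedged sketch of the kind of multivariable regression described above, fit with statsmodels; the input file, column names, and model form are assumptions for illustration, not the study's actual specification:

import pandas as pd
import statsmodels.formula.api as smf

# One row per clinician-day: daily EHR minutes, message counts by source,
# visit volume, and an organization identifier (hypothetical schema).
df = pd.read_csv("clinician_days.csv")  # hypothetical input file

model = smf.ols(
    "ehr_minutes ~ patient_messages + other_messages + visit_volume"
    " + C(organization)",
    data=df,
).fit()

# The coefficient on patient_messages estimates the additional EHR minutes
# per extra patient message per day (reported as 2.32 minutes in Results).
print(model.params["patient_messages"])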