More automated means of leveraging unstructured data from daily clinical practice are crucial as therapeutic options and access to individual-level health information increase. Research-minded oncologists can advance evidence-based research by taking advantage of new clinical NLP technologies. As continued progress is made in applying NLP to oncological research, incremental gains will lead to large impacts, building a cost-effective infrastructure for advancing cancer care.
Coronavirus disease 2019 (COVID-19) is a global pandemic. Although much has been learned about the novel coronavirus since its emergence, many open questions remain related to tracking its spread, describing symptomology, predicting the severity of infection, and forecasting healthcare utilization. Free-text clinical notes contain critical information for resolving these questions. Data-driven, automatic information extraction models are needed to use this text-encoded information in large-scale studies. This work presents a new clinical corpus, referred to as the COVID-19 Annotated Clinical Text (CACT) Corpus, which comprises 1,472 notes with detailed annotations characterizing COVID-19 diagnoses, testing, and clinical presentation. We introduce a span-based event extraction model that jointly extracts all annotated phenomena, achieving high performance in identifying COVID-19 and symptom events with associated assertion values (0.83-0.97 F1 for events and 0.73-0.79 F1 for assertions). Our span-based event extraction model outperforms an extractor built on MetaMapLite for the identification of symptoms with assertion values. In a secondary use application, we predicted COVID-19 test results using structured patient data (e.g., vital signs and laboratory results) together with automatically extracted symptom information to explore the clinical presentation of COVID-19. Automatically extracted symptoms improve COVID-19 prediction performance beyond structured data alone.
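The secondary-use experiment above combines structured patient data with NLP-extracted symptom assertions as classifier features. A minimal sketch of that idea, not the authors' actual pipeline: hypothetical structured fields (temperature, oxygen saturation) and CACT-style symptom assertions ("present"/"absent") are merged into one feature vector and fed to a plain logistic-regression classifier trained by gradient descent. All patient values, feature names, and weights here are invented for illustration.

```python
import math

# Hypothetical structured data per patient (vitals/labs).
structured = [
    {"temp_c": 38.6, "spo2": 91},
    {"temp_c": 36.8, "spo2": 98},
    {"temp_c": 38.1, "spo2": 93},
    {"temp_c": 37.0, "spo2": 97},
]
# Hypothetical symptom events extracted from notes, with assertion values
# of the kind CACT-style annotation would yield.
symptoms = [
    {"cough": "present", "anosmia": "present"},
    {"cough": "absent",  "anosmia": "absent"},
    {"cough": "present", "anosmia": "absent"},
    {"cough": "present", "anosmia": "absent"},
]
labels = [1, 0, 1, 0]  # invented COVID-19 test results

def featurize(s, sym):
    """Concatenate structured signals with extracted-symptom indicators."""
    return [
        s["temp_c"] - 37.0,                              # fever signal
        (95 - s["spo2"]) / 5.0,                          # hypoxia signal
        1.0 if sym.get("cough") == "present" else 0.0,   # NLP-derived
        1.0 if sym.get("anosmia") == "present" else 0.0, # NLP-derived
    ]

X = [featurize(s, m) for s, m in zip(structured, symptoms)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w, b = [0.0] * 4, 0.0
for _ in range(500):
    for x, y in zip(X, labels):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]
        b -= 0.1 * g

preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in X]
```

The design point the abstract makes is visible in the feature vector: the last two entries exist only because an extraction model turned free-text mentions into assertion values, so dropping them reduces the model to structured data alone.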
Introduction: A key attribute of a learning health care system is the ability to collect and analyze routinely collected clinical data in order to quickly generate new clinical evidence, and to monitor the quality of the care provided. To achieve this vision, clinical data must be easy to extract and stored in computer-readable formats. We conducted this study across multiple organizations to assess the availability of such data specifically for comparative effectiveness research (CER) and quality improvement (QI) on surgical procedures. Setting: This study was conducted in the context of the data needed for the already established Surgical Care and Outcomes Assessment Program (SCOAP), a clinician-led, performance benchmarking, and QI registry for surgical and interventional procedures in Washington State. Methods: We selected six hospitals, managed by two Health Information Technology (HIT) groups, and assessed the ease of automated extraction of the data required to complete the SCOAP data collection forms. Each data element was classified as easy, moderate, or complex to extract. Results: Overall, a significant proportion of the data required to automatically complete the SCOAP forms was not stored in structured computer-readable formats, with more than 75 percent of all data elements classified as moderately complex or complex to extract. The distribution differed significantly between the health care systems studied. Conclusions: Although highly desirable, a learning health care system does not automatically emerge from the implementation of electronic health records (EHRs). Innovative methods to improve the structured capture of clinical data are needed to facilitate the use of routinely collected clinical data for patient phenotyping.
Objectives: We describe the evaluation of a system to create hospital progress notes using voice and electronic health record integration to determine if note timeliness, quality, and physician satisfaction are improved. Materials and methods: We conducted a randomized controlled trial to measure effects of this new method of writing inpatient progress notes, which evolved over time, on important outcomes. Results: Intervention subjects created 709 notes and control subjects created 1143 notes. When adjusting for clustering by provider and secular trends, there was no significant difference between the intervention and control groups in the time between when patients were seen on rounds and when progress notes were viewable by others (95% confidence interval −106.9 to 12.2 min). There were no significant differences in physician satisfaction or note quality between intervention and control. Discussion: Though we did not find support for the superiority of this system (Voice-Generated Enhanced Electronic Note System [VGEENS]) for our 3 primary outcomes, if notes are created using voice during or soon after rounds they are available within 10 min. Shortcomings that likely influenced subject satisfaction include the early state of our VGEENS and the short interval for system development before the randomized trial began. Conclusion: VGEENS permits voice dictation on rounds to create progress notes, can reduce delay in note availability, and may reduce dependence on copy/paste within notes. Timing of dictation determines when notes are available. Capturing notes in near-real-time has potential to apply NLP and decision support sooner than when notes are typed later in the day, and to improve note accuracy.