Introduction
Identifying occurrences of medication side effects and adverse drug events (ADEs) is an important and challenging task because they are frequently mentioned only in clinical narrative and are not formally reported.

Methods
We developed a natural language processing (NLP) system that identifies mentions of symptoms and drugs in clinical notes and labels the relationship between the mentions as indications or ADEs. The system leverages an existing word embeddings model with induced word clusters for dimensionality reduction. It employs a conditional random field (CRF) model for named entity recognition (NER) and a random forest model for relation extraction (RE).

Results
The final performance of each model was evaluated separately and then in combination on a manually annotated evaluation set. The micro-averaged F1 score was 80.9% for NER, 88.1% for RE, and 61.2% for the integrated system. Outputs from our system were submitted to the NLP Challenges for Detecting Medication and Adverse Drug Events from Electronic Health Records (MADE 1.0) competition (Yu et al. in http://bio-nlp.org/index.php/projects/39-nlp-challenges , 2018). System performance was evaluated in three tasks (NER, RE, and complete system), with multiple teams submitting output from their systems for each task. Our RE system placed first in Task 2 of the challenge, and our integrated system achieved third place in Task 3.

Conclusion
Adding to the growing number of publications that utilize NLP to detect occurrences of ADEs, our study illustrates the benefits of employing innovative feature engineering.
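The abstract's use of induced word clusters for dimensionality reduction can be illustrated with a minimal sketch: instead of feeding the CRF the raw word identity alone, each token is mapped to a coarse cluster ID so that semantically related words (e.g. two anticoagulants) share a feature. The cluster table, feature names, and example sentence below are hypothetical, not taken from the described system.

```python
# Hypothetical mapping from vocabulary words to induced cluster IDs
# (in the described system these would come from clustering a
# pretrained word embeddings model).
WORD_CLUSTERS = {
    "nausea": 12, "vomiting": 12, "rash": 7,
    "warfarin": 3, "aspirin": 3,
}

def token_features(tokens, i):
    """Build a CRF feature dict for the token at position i.

    Replacing the raw word identity with a coarse cluster ID reduces
    feature dimensionality while still letting the model generalize
    across semantically similar words.
    """
    word = tokens[i].lower()
    feats = {
        "word.lower": word,
        "cluster": WORD_CLUSTERS.get(word, -1),  # -1 = out-of-vocabulary
        "is_title": tokens[i].istitle(),
    }
    if i > 0:
        feats["prev.cluster"] = WORD_CLUSTERS.get(tokens[i - 1].lower(), -1)
    else:
        feats["BOS"] = True  # beginning of sentence
    return feats

sent = ["Patient", "developed", "nausea", "after", "warfarin"]
features = [token_features(sent, i) for i in range(len(sent))]
```

Feature dicts like these would then be passed to a CRF sequence tagger; the point of the sketch is only the cluster-based dimensionality reduction, not the full feature set of the system.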
Objective
To evaluate the feasibility, accuracy, and interoperability of a natural language processing (NLP) system that extracts diagnostic assertions of pneumonia across different clinical note types and institutions.

Materials and Methods
A rule-based NLP system was designed to identify assertions of pneumonia in 3 types of clinical notes from electronic health records (EHRs): emergency department notes, radiology reports, and discharge summaries. The lexicon and classification logic were tailored for each note type. The system was first developed and evaluated using annotated notes from the Department of Veterans Affairs (VA). Interoperability was assessed using data from the University of Utah (UU).

Results
The NLP system comprised 782 rules and achieved moderate-to-high performance in all 3 note types in VA (precision/recall/F1: emergency = 88.1/86.0/87.1; radiology = 71.4/96.2/82.0; discharge = 88.3/93.0/90.1). When applied to UU data, performance was maintained in emergency and radiology notes but decreased in discharge summaries (emergency = 84.7/94.3/89.3; radiology = 79.7/100.0/87.9; discharge = 65.5/92.7/76.8). Customization with 34 additional rules increased performance for all note types (emergency = 89.3/94.3/91.7; radiology = 87.0/100.0/93.1; discharge = 75.0/95.1/83.4).

Conclusion
NLP can be used to accurately identify the diagnosis of pneumonia across different clinical settings and institutions. A limited amount of customization to account for differences in lexicon, clinical definition of pneumonia, and EHR structure can achieve high accuracy without substantial modification.
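The rule-based assertion approach the abstract describes can be sketched in miniature: a mention rule detects the concept, and negation rules decide whether the assertion is affirmed or ruled out. The two regexes below are illustrative stand-ins, not any of the 782 rules of the actual system, and the label names are assumptions.

```python
import re

# Illustrative mention rule: a pneumonia concept in the note text.
PNEUMONIA = re.compile(r"\b(pneumonia|pna)\b", re.IGNORECASE)

# Illustrative negation rule: a negation cue preceding the concept
# within the same sentence (no period in between).
NEGATION = re.compile(
    r"\b(no|denies|without|negative for|ruled out)\b[^.]*?\b(pneumonia|pna)\b",
    re.IGNORECASE,
)

def classify(text):
    """Return 'positive', 'negated', or 'absent' for pneumonia assertions."""
    if not PNEUMONIA.search(text):
        return "absent"
    if NEGATION.search(text):
        return "negated"
    return "positive"
```

A real system of this kind would tailor the lexicon and logic per note type, as the abstract notes; the sketch shows only the basic rule-and-negation pattern.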