Using natural language processing tools, we investigate the semantic differences in medical guidelines for three decision problems: breast cancer screening, lower back pain, and hypertension management. The differences in recommendations may cause undue variability in patient treatments and outcomes; therefore, a better understanding of their causes can contribute to a discussion of possible remedies. We show that these differences in recommendations are highly correlated with the knowledge brought to the problem by different medical societies, as reflected in the conceptual vocabularies used by the different groups of authors. While this article is a case study using three sets of guidelines, the proposed methodology is broadly applicable. Technically, our method combines word embeddings with a novel graph-based similarity model for comparing collections of documents. For our main case study, we use the CDC summaries of the recommendations (very short documents) and the full (long) texts of the guidelines, represented as bags of concepts. For the other case studies, we compare the full text of guidelines with their abstracts and with tables summarizing the differences between recommendations. The proposed approach is evaluated using different language models and different distance measures. In all the experiments, the results are highly statistically significant. We discuss the significance of the results, their possible extensions, and connections to other domains of knowledge. We conclude that automated methods, although not perfect, can be applied to conceptual comparisons of different medical guidelines and can enable their analysis at scale.
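The core comparison step described above can be sketched in a few lines. This is a minimal illustration, not the article's actual pipeline: the toy embeddings, concept lists, and cosine measure below are assumptions standing in for the trained language models, the graph-based similarity model, and the distance measures the abstract mentions.

```python
import math

# Hypothetical low-dimensional embeddings for a few medical concepts.
# A real system would use vectors from a trained language model.
EMBEDDINGS = {
    "screening":    [0.9, 0.1, 0.0],
    "mammography":  [0.8, 0.2, 0.1],
    "risk":         [0.1, 0.9, 0.2],
    "hypertension": [0.0, 0.2, 0.9],
}

def doc_vector(concepts):
    """Represent a document (bag of concepts) as the mean of its concept embeddings."""
    dim = len(next(iter(EMBEDDINGS.values())))
    total = [0.0] * dim
    for c in concepts:
        for i, v in enumerate(EMBEDDINGS[c]):
            total[i] += v
    return [v / len(concepts) for v in total]

def cosine_similarity(a, b):
    """Cosine similarity between two vectors, one of many possible distance measures."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two hypothetical guidelines represented as bags of concepts.
guideline_a = ["screening", "mammography", "risk"]
guideline_b = ["screening", "risk", "hypertension"]
sim = cosine_similarity(doc_vector(guideline_a), doc_vector(guideline_b))
```

Swapping the embedding table for model-derived vectors, and the cosine measure for another metric, is how the abstract's evaluation over "different language models and different distance measures" could be organized.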
Background: Medical guidelines provide the conceptual link between a diagnosis and a recommendation. They often disagree in their recommendations. There are over thirty-five thousand guidelines indexed by PubMed, which creates a need for automated methods for the analysis of recommendations, i.e., recommended actions, for similar conditions.
Results: This article advances the state of the art in text understanding of medical guidelines by showing the applicability of transformer-based models and transfer learning (domain adaptation) to the problem of finding condition-action and other conditional sentences. We report the results of three studies using syntactic, semantic, and deep learning methods, with and without transformer-based models such as BioBERT and BERT. We perform an in-depth evaluation on a set of three annotated medical guidelines. Our experiments show that a combination of machine learning domain adaptation and transfer learning can improve the ability to automatically find conditional sentences in clinical guidelines. We show substantial improvements over prior art (up to 25%), and discuss several directions for extending this work, including addressing the problem of paucity of annotated data.
Conclusion: Modern deep learning methods, when applied to the text of clinical guidelines, yield substantial improvements in our ability to find sentences expressing the relations of condition-consequence, condition-action, and action.
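To make the target task concrete, a trivial cue-word baseline for spotting conditional sentences can be sketched as follows. This is an illustrative heuristic only, not the transformer-based approach the article evaluates; the cue list is an assumption, and a real system would instead fine-tune a model such as BioBERT on annotated guideline sentences.

```python
import re

# Toy heuristic: a sentence is flagged as conditional if it contains a
# common conditional cue phrase. The cue list below is a hypothetical
# example, far weaker than a trained classifier.
CONDITION_CUES = re.compile(
    r"\b(if|when|unless|in case of|in patients with)\b",
    re.IGNORECASE,
)

def is_conditional(sentence: str) -> bool:
    """Return True if the sentence contains a conditional cue phrase."""
    return bool(CONDITION_CUES.search(sentence))

sentences = [
    "If the patient is over 50, recommend annual screening.",
    "Hypertension is a common chronic condition.",
]
flags = [is_conditional(s) for s in sentences]  # → [True, False]
```

A baseline like this is what the syntactic methods in the reported studies improve upon; the deep learning models learn such cues (and much subtler ones) from annotated data rather than from a hand-written list.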