Background: Machine-learning and deep-learning algorithms for clinical diagnosis depend inherently on the availability of large-scale clinical datasets. The lack of such datasets, along with inherent problems such as overfitting, often necessitates innovative solutions. Probabilistic modeling closely mimics the rationale behind clinical diagnosis and represents one such solution.

Objective: The aim of this study was to develop and validate a probabilistic model for differential diagnosis across different medical domains.

Methods: Numerical values of symptom-disease associations were used to mathematically represent medical domain knowledge; these values served as the core engine of the probabilistic model. For a given set of symptoms, the model produced a ranked list of differential diagnoses, which was compared to the differential diagnosis constructed by a physician in a consult. Practicing medical specialists were integral to the development and validation of the model. Clinical vignettes (patient case studies) were used to compare the accuracy of the doctors and the model against the assumed gold standard. The accuracy analysis covered the following metrics: top-3 accuracy, precision, and recall.

Results: The model demonstrated a statistically significant improvement (P=.002) in diagnostic accuracy (85%) over the doctors' performance (67%). This advantage held across all three categories of clinical vignettes: 100% vs 82% (P<.001) for highly specific disease presentations, 83% vs 65% (P=.005) for moderately specific presentations, and 72% vs 49% (P<.001) for nonspecific presentations. The model performed slightly better than the doctors' average in precision (62% vs 60%, P=.43) and slightly worse in recall (53% vs 56%, P=.27), but neither difference was statistically significant.

Conclusions: The present study demonstrates a marked improvement over previously reported results, attributable to the development of a stable probabilistic framework that uses symptom-disease associations to mathematically represent medical domain knowledge. The current iteration relies on static, manually curated values for the degree of association; shifting to values derived from real-world data is the next step in model development.
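The abstract describes ranking candidate diseases from numerical symptom-disease association values. The study's actual model and weights are not given here, so the following is only a minimal naive-Bayes-style sketch with entirely hypothetical diseases, symptoms, and association weights, to illustrate how such a ranked differential could be produced:

```python
from math import log

# Hypothetical symptom-disease association weights (illustrative only,
# not taken from the study): higher weight = stronger association.
ASSOCIATIONS = {
    "influenza":   {"fever": 0.9, "cough": 0.8, "fatigue": 0.6},
    "common_cold": {"cough": 0.7, "sneezing": 0.8, "fatigue": 0.3},
    "pneumonia":   {"fever": 0.7, "cough": 0.9, "dyspnea": 0.7},
}

def rank_diagnoses(symptoms, associations=ASSOCIATIONS, floor=0.01):
    """Return diseases ranked by summed log association weights.

    Unlisted symptom-disease pairs get a small floor weight so that a
    single missing association does not eliminate a candidate disease.
    """
    scores = {}
    for disease, weights in associations.items():
        scores[disease] = sum(log(weights.get(s, floor)) for s in symptoms)
    return sorted(scores, key=scores.get, reverse=True)

# A ranked differential for one hypothetical presentation.
ranking = rank_diagnoses(["fever", "cough"])
```

Top-3 accuracy, as used in the study, would then ask whether the gold-standard diagnosis appears in `ranking[:3]`.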
The present study proposed a new statistical measure of the association between drug-adverse drug reaction (ADR) pairs in a pharmacovigilance system. A subjective Bayesian measure corresponding to the proportional reporting ratio (PRR) parameter was proposed for quantitative signal detection. Classical and Bayesian inference procedures were used in the analysis, with four drug-ADR pairs as examples. Prior information was constructed from expert opinion. The results show that Bayesian inference is more reliable than classical inference. This study suggests that for spontaneous reporting in a pharmacovigilance system, subjective Bayesian inference is better suited to small sample sizes.
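The PRR referenced above is a standard disproportionality measure computed from a 2x2 contingency table of spontaneous reports: PRR = [a/(a+b)] / [c/(c+d)]. A short sketch of that classical calculation, using made-up report counts (the study's four drug-ADR pairs and its Bayesian prior are not reproduced here):

```python
def proportional_reporting_ratio(a, b, c, d):
    """Compute the PRR from a 2x2 contingency table of spontaneous reports.

    a: reports with the drug of interest AND the ADR of interest
    b: reports with the drug of interest and other ADRs
    c: reports with other drugs AND the ADR of interest
    d: reports with other drugs and other ADRs
    """
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts only (not from the study): the ADR appears in
# 25 of 100 reports for the drug vs 50 of 900 reports for other drugs.
prr = proportional_reporting_ratio(a=25, b=75, c=50, d=850)
```

With small cell counts such as these, the point estimate is unstable, which is the motivation the abstract gives for a subjective Bayesian alternative.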