The rapid development of diagnostic technologies in healthcare increasingly requires physicians to handle and integrate the heterogeneous yet complementary data produced during routine practice. For instance, personalized diagnosis and treatment planning for a single cancer patient relies on various image data (e.g., radiological, pathological, and camera images) and non-image data (e.g., clinical and genomic data). However, such decision-making procedures can be subjective, qualitative, and subject to large inter-subject variability. With recent advances in multi-modal deep learning, an increasing number of efforts have been devoted to a key question: how do we extract and aggregate multi-modal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews recent studies addressing this question. Briefly, the review covers (1) an overview of current multi-modal learning workflows, (2) a summary of multi-modal fusion methods, (3) a discussion of performance, (4) applications in disease diagnosis and prognosis, and (5) challenges and future directions.
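As a purely illustrative sketch of the feature-level (late) fusion strategy surveyed in such reviews, the following PyTorch snippet concatenates an image-derived embedding with a tabular (clinical/genomic) embedding before a shared classification head; the module name, layer sizes, and input dimensions are hypothetical and do not come from any specific study.

```python
# Minimal late-fusion sketch (illustrative only; dimensions are assumptions).
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate per-modality embeddings, then classify jointly."""

    def __init__(self, img_dim=512, tab_dim=32, hidden=128, n_classes=2):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.tab_encoder = nn.Sequential(nn.Linear(tab_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_feats, tab_feats):
        # Fuse by concatenating the two modality embeddings.
        z = torch.cat([self.img_encoder(img_feats),
                       self.tab_encoder(tab_feats)], dim=-1)
        return self.head(z)

# Example usage with random stand-in features.
model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```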
Purpose: To evaluate whether lesion radiomics features and absorbed dose metrics extracted from post-therapy 90Y PET can be integrated to better predict outcome in microsphere radioembolization of liver malignancies.

Methods: Given the noisy nature of 90Y PET, a liver phantom study with repeated acquisitions and varying reconstruction parameters was first used to identify a subset of robust radiomics features for the patient analysis. In 36 radioembolization procedures, 90Y PET/CT was performed within a couple of hours of therapy to extract 46 radiomics features and estimate absorbed dose in 105 primary and metastatic liver lesions. Robust radiomics modeling was based on bootstrapped multivariate logistic regression with shrinkage regularization (LASSO) and Cox regression with LASSO. Nested cross-validation and bootstrap resampling were used for optimal parameter/feature selection and to guard against overfitting. Spearman rank correlation was used to analyze feature associations. The area under the receiver-operating-characteristic curve (AUC) was used for lesion response (at first follow-up) analysis, while Kaplan-Meier plots and the c-index were used to assess progression model performance. Models with absorbed dose only, radiomics only, and combined features were developed to predict lesion outcome.

Results: The phantom study identified 15/46 reproducible and robust radiomics features that were subsequently used in the patient models. A lesion response model with zone percentage (ZP) and mean absorbed dose achieved an AUC of 0.729 (95% CI: 0.702-0.758), and a progression model with zone-size nonuniformity (ZSN) and absorbed dose achieved a c-index of 0.803 (95% CI: 0.790-0.815) under nested cross-validation (CV). The combined models outperformed the radiomics-only and absorbed-dose-only models.

Conclusion: We have developed new lesion-level response and progression models using textural radiomics features derived from 90Y PET, combined with mean absorbed dose, for predicting outcome in radioembolization. These encouraging results need further validation in independent datasets prior to clinical adoption.
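The modeling pattern described above (L1-penalized logistic regression tuned and evaluated with nested cross-validation) can be sketched as follows; the feature matrix, outcome labels, fold counts, and regularization grid are stand-ins and do not reproduce the authors' pipeline or data.

```python
# Illustrative nested cross-validation with L1 (LASSO) logistic regression.
# The feature matrix X (radiomics + mean absorbed dose) and binary response y
# are random stand-ins; fold counts and the C grid are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 16))    # e.g., 15 robust radiomics features + mean dose
y = rng.integers(0, 2, size=105)  # lesion response vs. non-response (placeholder)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # tunes C
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # estimates AUC

pipe = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", max_iter=5000),
)
grid = GridSearchCV(
    pipe,
    {"logisticregression__C": np.logspace(-2, 2, 9)},
    cv=inner,
    scoring="roc_auc",
)
# The outer loop scores the fully tuned model on held-out folds only.
outer_auc = cross_val_score(grid, X, y, cv=outer, scoring="roc_auc")
print(outer_auc.mean())
```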
Diffusion-weighted magnetic resonance imaging (DW-MRI) captures tissue microarchitecture at a millimeter scale. With recent advances in data sharing, large-scale multi-site DW-MRI datasets are being made available for multi-site studies. However, DW-MRI suffers from measurement variability (e.g., inter- and intra-site variability, hardware performance, and sequence design), which consequently degrades performance in multi-site and/or longitudinal diffusion studies. In this study, we propose a novel deep-learning-based method to harmonize DW-MRI signals for more reproducible and robust estimation of microstructure. Our method introduces a data-driven, scanner-invariant regularization scheme to yield a more robust fiber orientation distribution function (FODF) estimation. We study the Human Connectome Project (HCP) young adult test-retest group as well as the MASiVar dataset (with inter- and intra-site scan/rescan data). The 8th-order spherical harmonic coefficients are employed as the data representation. The results show that the proposed harmonization approach maintains higher angular correlation coefficients (ACC) with the ground-truth signals (0.954 versus 0.942) while achieving higher consistency of FODF signals for intra-scanner data (0.891 versus 0.826), compared with a baseline supervised deep learning scheme. Furthermore, the proposed data-driven framework is flexible and potentially applicable to a wider range of data harmonization problems in neuroimaging.
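A minimal sketch of the angular correlation coefficient (ACC) used above as the evaluation metric is given below, assuming real, even-order spherical harmonic coefficients with the l = 0 term excluded; this ordering and exclusion convention follow common practice and are assumptions rather than a transcription of the paper's implementation.

```python
# Hedged sketch of the angular correlation coefficient (ACC) between two
# FODFs represented by real even-order spherical harmonic coefficients.
import numpy as np

def acc(u, v, eps=1e-12):
    """Angular correlation between SH coefficient vectors u and v (l = 0 excluded)."""
    u = np.asarray(u, float)[1:]  # drop the single l = 0 coefficient
    v = np.asarray(v, float)[1:]
    denom = np.linalg.norm(u) * np.linalg.norm(v) + eps
    return float(np.dot(u, v) / denom)

# An 8th-order even SH basis has (8 + 1)(8 + 2) / 2 = 45 coefficients.
rng = np.random.default_rng(0)
f1 = rng.normal(size=45)
f2 = f1 + 0.1 * rng.normal(size=45)  # a slightly perturbed "rescan"
print(acc(f1, f2))  # close to 1 for highly similar FODFs
```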