Abstract: In recent years enterprise imaging (EI) solutions have become a core component of healthcare initiatives, while a simultaneous rise in big data has opened up a number of possibilities in how we can analyze and derive insights from large amounts of medical data. Together they afford us a range of opportunities that can transform healthcare in many fields. This paper provides a review of recent developments in EI and big data in the context of medical physics. It summarizes the key aspects of EI and big data in …
“…Various techniques can be employed to address the lack of data, such as data augmentation [34,37]. Additional realistic data can be generated through GANs by sampling the learned latent data distributions [13,34,37]. LRP pruning was shown to provide good results for transfer learning with small datasets [56].…”
Section: Data-centric XAI
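To make the augmentation strategy quoted above concrete, the minimal sketch below builds a simple augmentation pipeline with torchvision. The specific transforms, parameter ranges, and the single-channel placeholder image are illustrative assumptions, not the setup of the cited works, and would need to be tuned to the imaging modality and anatomy.

```python
import torch
from torchvision import transforms

# Illustrative augmentation pipeline for 2D medical images (assumed to be
# single-channel tensors); transforms and ranges are placeholder choices.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                 # small in-plane rotations
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),   # mild crop/zoom
    transforms.RandomHorizontalFlip(p=0.5),                # only if anatomically plausible
    transforms.GaussianBlur(kernel_size=3),                # simulate slight acquisition blur
])

image = torch.rand(1, 256, 256)  # placeholder grayscale image tensor (C, H, W)
augmented = augment(image)
print(augmented.shape)           # torch.Size([1, 224, 224])
```

Geometric transforms such as flips are only appropriate when the resulting images remain anatomically plausible, which is why augmentation pipelines for medical data are usually chosen per modality rather than copied from natural-image practice.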
“…The DNN model or the algorithm architecture also depends on hyperparameters that determine the performance of the trained model [12]. Neural networks require large amounts of training data and time before they can produce acceptable results, yet they behave like a 'black box', providing little or no insight into the decision-making process [13]. This opaqueness makes it hard even for the model developers to understand how the model is functioning [14].…”
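As a small illustration of how such hyperparameters are typically explored, the hedged sketch below sweeps a few of them (hidden-layer sizes, initial learning rate, L2 penalty) with scikit-learn's GridSearchCV on a toy dataset. The MLPClassifier stand-in, the grid values, and the digits dataset are assumptions for illustration only, not the models or data used in the cited work.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # toy stand-in for a clinical dataset

# Hypothetical grid: hidden-layer sizes, initial learning rate, L2 penalty (alpha)
param_grid = {
    "hidden_layer_sizes": [(64,), (128, 64)],
    "learning_rate_init": [1e-3, 1e-2],
    "alpha": [1e-4, 1e-3],
}

search = GridSearchCV(MLPClassifier(max_iter=300, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```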
“…Decision support systems (DSS) often act as "black boxes", which prevents the confidence necessary for their use from being established. Further research in the field of explainable AI (XAI) is mandatory to reconcile the requirement for high-performing DSS, which are generally complex and barely interpretable, with the need to provide responses that can be explained to patients [18,36,37,38,39].…”
Section: Current Limitations to the Use of AI-based Solutions in Clinical Workflows
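One common way XAI methods attempt to open such "black boxes" is by attributing a prediction back to the input pixels. The sketch below computes a plain gradient saliency map in PyTorch for a toy stand-in classifier; the model, input size, and class index are placeholders rather than the DSS architectures discussed in the cited works.

```python
import torch
import torch.nn as nn

# Toy stand-in for a DSS classifier; real models and preprocessing will differ.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
model.eval()

x = torch.rand(1, 1, 64, 64, requires_grad=True)  # placeholder input image
score = model(x)[0, 1]                            # logit of the hypothesised "positive" class
score.backward()                                  # gradient of that score w.r.t. input pixels

saliency = x.grad.abs().squeeze()                 # pixel-wise relevance estimate
print(saliency.shape)                             # torch.Size([64, 64]) heat map
```

The resulting map can be overlaid on the original image so that a clinician can see which regions drove the model's score, which is one of the simpler ways of turning an opaque prediction into a reviewable response.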
“…Additionally, Medical Physicists could act as facilitators in the current research challenge of making AI models explainable, which is a fundamental requirement for making them acceptable as clinical support tools [36]. They are naturally the intermediary professionals between clinicians and technology, so they can help set up the appropriate dictionaries to translate machine information into human (clinical) language and vice versa, thus broadly supporting XAI research.…”
“…Last but not least, the review of McCarthy et al. [12] provides insights into recent enterprise imaging solutions applied to medical physics and healthcare settings. The rise in big data has opened up numerous opportunities for applying enterprise imaging solutions to big data problems in healthcare.…”
Section: The Pillars of AI Knowledge for MPs