2021
DOI: 10.1016/j.ejmp.2021.02.006
AI applications to medical images: From machine learning to deep learning

Abstract: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that need to be clarified about how to develop AI applications as clinical decision support systems in the real-world context. Methods: A narrative review has been performed including a critical assessment of articles published between 1989 and 2021 that guided the challenging sections. Results: We first illustrate the architectural characteristics of machine learning (M…

Cited by 402 publications (214 citation statements)
References: 161 publications (195 reference statements)
“…This halts the training of initial layers because the gradient does not change the weights anymore. GANs are also prone to generate images with similar appearance as an effect of mode collapse [89], which occurs when the generator produces only a limited or a single type of output to fool the discriminator. Due to this, the discriminator does not learn to come out of this trap, resulting in a GAN failure. Apart from this, GAN-based models can also add unrealistic artefacts in the images [87].…”
Section: Data Augmentation Using GANs
confidence: 99%
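To make the failure modes mentioned in this excerpt concrete, here is a minimal PyTorch-style sketch (not taken from the cited works) of a standard GAN training step with a crude, illustrative mode-collapse check: if the generator starts producing nearly identical outputs, the per-batch standard deviation of its samples collapses towards zero. All dimensions, architectures, and thresholds below are hypothetical stand-ins.

# Minimal GAN training step with a crude mode-collapse indicator (illustrative only).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # hypothetical toy sizes

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for real (e.g. image-patch) data

for step in range(100):
    # Discriminator update: distinguish real samples from generated ones.
    z = torch.randn(32, latent_dim)
    fake = generator(z).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator.
    z = torch.randn(32, latent_dim)
    fake = generator(z)
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Crude mode-collapse signal: near-identical generated samples drive the
    # per-batch standard deviation towards zero.
    batch_std = fake.std(dim=0).mean().item()
    if batch_std < 1e-3:
        print(f"step {step}: possible mode collapse (batch std {batch_std:.2e})")

In practice, dedicated diversity metrics or visual inspection are used rather than this simple threshold; the point here is only to show where in the loop the collapse would become visible.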
“…The requirement of large data sets for training deep learning models in digital pathology often results in WSIs collected across multiple research centres being integrated into a single data set. However, variations in WSI preparation may result in batch effects across images which must be mitigated to reduce bias and improve the generalisability of models [22]. For example, varying concentrations and volumes of stain used in slide preparation, as well as exposure to light during storage, may result in biases between WSIs.…”
Section: Colour Normalisation and Augmentation
confidence: 99%
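As a concrete illustration of the colour-normalisation step this excerpt refers to, below is a short sketch of one common approach (Reinhard-style statistics matching in LAB colour space). It is not necessarily the method used by the cited study; the function name and the random stand-in tiles are purely illustrative.

# Illustrative Reinhard-style colour normalisation for histology tiles:
# match the per-channel LAB mean/std of a source tile to a reference tile.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def normalise_stain(source_rgb, reference_rgb):
    """source_rgb / reference_rgb: float RGB images in [0, 1], shape (H, W, 3)."""
    src, ref = rgb2lab(source_rgb), rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        # Shift and scale the source channel statistics onto the reference ones.
        out[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    return np.clip(lab2rgb(out), 0.0, 1.0)

# Usage sketch with random stand-ins for two differently stained tiles.
tile = np.random.rand(256, 256, 3)
reference = np.random.rand(256, 256, 3)
normalised = normalise_stain(tile, reference)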
“…Lack of interpretability remains a major obstacle to the widespread adoption of deep learning systems in healthcare [22]. Meaningful understanding of deep learning predictions is crucial in healthcare in order to instil trust from both the clinician's and the patient's perspective, and hence enable clinical translation.…”
Section: Model Interpretation
confidence: 99%
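One widely used family of post-hoc interpretation methods for the problem raised in this excerpt is gradient-based saliency, which attributes a prediction to individual input pixels. The sketch below only illustrates that general idea (an input-gradient saliency map for an image classifier); it is not the interpretation method of the cited work, and the tiny model and random input are untrained stand-ins.

# Illustrative input-gradient saliency map for an image classifier.
import torch
import torch.nn as nn

# Tiny stand-in classifier (untrained); any image classifier would do here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # e.g. benign vs. malignant, purely hypothetical classes
)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input tile

logits = model(image)
top_class = logits.argmax(dim=1).item()
# The gradient of the top-class score w.r.t. the input highlights the pixels
# the model relied on most for this prediction.
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) heat map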
“…When it comes to developing such applications, as reviewed by Castiglioni et al. [8], each of the phases required for building them has its specific challenge. Researchers in AI need to collect a large set of high-quality labelled and annotated data, as the accuracy of AI tools depends largely on the dataset used for training.…”
Section: The Pillars of AI Knowledge for MPs
confidence: 99%