2022
DOI: 10.1016/j.isci.2022.103767

Spectral decoupling for training transferable neural networks in medical imaging

Cited by 7 publications (5 citation statements)
References 22 publications (34 reference statements)
“…However, when we tested the methods on somatic variant calls before quality filtering, we discovered that all the tested machine learning models, including MuAt, were sensitive to variant calling artefacts. This finding builds upon previous reports of fragility in ML models and emphasizes the importance of using high-quality data and creating ML models that are robust to input data variability resulting from differences in data generating processes [57].…”
Section: Discussion (supporting)
confidence: 81%
“…Deep learning models have been reported to be fragile, or sensitive to small changes in input data, leading to incorrect or misleading conclusions drawn from model outputs [56,57]. As model fragility can be a significant challenge when deploying machine learning models for clinical use [58], we investigated whether MuAt models would be able to maintain robustness when faced with shifts in input data distribution.…”
Section: Discussion (mentioning)
confidence: 99%
“…Reference 28: detection and automated GG grading; CNN, semi-supervised learning; biopsy; training: local (580 slides) and PANDA (Radboud) dataset; external: PANDA (Karolinska) dataset; detection AUC: 0.92; GG grading accuracy: 0.831, κ quad: 0.93; GG2 vs. GG3-5 AUC: 0.93. Pohjonen et al. 29: improving generalization for the detection of prostate cancer; neural network trained with spectral decoupling; training: 90 patients from Helsinki; external: PESO dataset; networks trained with spectral decoupling achieve up to 9.5 percentage points higher accuracy on external datasets (the authors did not report the exact accuracy value). PANDA challenge 30: detection and automated GG grading; various DL algorithms; biopsy; training: PANDA dataset; external: two cohorts (714 and 330 slides); detection sensitivity: 0.986 and 0.977, specificity: 0.752 and 0.843; GG grading κ quad: 0.862 and 0.868. Silva-Rodríguez et al.…”
Section: Development of AI Models for Prostate Cancer Management (mentioning)
confidence: 99%
“…30,50 Heterogeneity can stem from various factors, such as staining variations, artifacts, and imaging differences between scanners. 29,75,106,107 To overcome this, an ideal approach is to have a sufficiently large and diverse training set, such as continuously collecting all cases over a certain period of time from multiple institutions, in order to cover all possible variations in the real world and represent the entire target population. From the perspective of fully utilizing the existing data, data augmentation techniques such as rotation, flipping, and color enhancement can be applied to enhance the original training set.…”
Section: Challenges of Application of AI in Clinic (mentioning)
confidence: 99%
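The geometric augmentations named in the passage above (rotation, flipping) can be sketched in a few lines of NumPy. This is a minimal illustrative example, not code from any of the cited works; the function name and its interface are hypothetical:

```python
import numpy as np

def augment(image, rng):
    """Apply a random 90-degree rotation and a random horizontal flip.

    A hypothetical minimal sketch of the geometric augmentations mentioned
    above; a real histopathology pipeline would typically add color
    augmentation as well. Assumes a square image so the shape is preserved
    under rotation.
    """
    image = np.rot90(image, k=int(rng.integers(0, 4)))  # 0, 90, 180, or 270 degrees
    if rng.random() < 0.5:
        image = image[:, ::-1]  # horizontal flip
    return np.ascontiguousarray(image)
```

Both operations only permute pixels, leaving their values (and thus stain statistics) untouched; the "color enhancement" mentioned in the quotation would require separate intensity transforms.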
“…Spectral decoupling, weight normalization, and the L1 penalty have been applied to varying degrees in studies concerning computational pathology: once [1], twice [2,3], and several times [4], respectively. As reported in general use cases, these methods tend to improve overall performance.…”
Section: Introduction (mentioning)
confidence: 99%
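In its general formulation, the spectral decoupling discussed throughout these citation statements replaces a conventional weight-decay term with an L2 penalty on the network's output logits. A minimal NumPy sketch of that loss, assuming this general form (the function names and the penalty coefficient are illustrative, not taken from the cited paper):

```python
import numpy as np

def cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def spectral_decoupling_loss(logits, labels, lam=0.1):
    # Spectral decoupling: standard cross-entropy plus an L2 penalty on the
    # raw logits (rather than on the weights, as in ordinary weight decay).
    penalty = 0.5 * lam * (logits ** 2).sum(axis=1).mean()
    return cross_entropy(logits, labels) + penalty
```

With `lam = 0` the loss reduces to plain cross-entropy; a positive coefficient discourages overconfident logits, which is the mechanism credited with the improved accuracy on external datasets reported above.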