2021
DOI: 10.1007/978-3-030-92659-5_39
How Reliable Are Out-of-Distribution Generalization Methods for Medical Image Segmentation?

Cited by 16 publications (8 citation statements)
References 8 publications
“…While these heatmaps showed that the authors' models only focused on the part of the input image containing the cancerous cell, the heatmaps were not precise enough to show which parts of the cell were important. Furthermore, DL models are great at learning statistical biases in the data, which may be specific to certain acquisition protocols or hardware 9 . This in turn can cause a decrease in generalizability of the model and decreased performances on external test data, as could also be observed in this study.…”
confidence: 54%
“…This is denoted as silent failure . Allowing for model adaptation also makes it possible for the model to work in new hospital environments 9 10 16 .…”
Section: Methods
confidence: 99%
“…After the radiologist completes the report, the new ground truth annotation can be fed back to the AI model for further improvement. This functionality opens up the possibility of adapting the AI system over time, which prevents the deterioration of model performance as time passes and data distribution changes [9,10]. This loss in performance often goes unnoticed, as deep learning models report high confidence even for low-quality predictions.…”
Section: Integration and Workflow for Lifelong Learning
confidence: 99%
“…We hereby present MiniVess, an expert-annotated dataset of 70 3D 2PFM image volumes of rodent cerebrovasculature. The dataset can be used for training segmentation networks 38,39 , fine-tuning self-supervised pre-trained networks 31,32,40 , and as an external validation set for assessing a model’s generalizability 41 . The 3D volumes in this dataset have been curated to only contain clean XYZ imaging in order to ensure correct and consistent annotations, or segmentations, which has been observed to be integral to the evaluation of machine learning models 42 .…”
Section: Background and Summary
confidence: 99%