2021
DOI: 10.48550/arxiv.2109.01668
Preprint

How Reliable Are Out-of-Distribution Generalization Methods for Medical Image Segmentation?

Abstract: The recent achievements of Deep Learning rely on the test data being similar in distribution to the training data. In an ideal case, Deep Learning models would achieve Out-of-Distribution (OoD) Generalization, i.e. reliably make predictions on out-of-distribution data. Yet in practice, models usually fail to generalize well when facing a shift in distribution. Several methods were thereby designed to improve the robustness of the features learned by a model through Regularization- or Domain-Prediction-based sche…

Cited by 1 publication (1 citation statement)
References 10 publications
“…We hereby present MiniVess, an expert-annotated dataset of 70 3D 2PFM image volumes of rodent cerebrovasculature. The dataset can be used for training segmentation networks [38,39], fine-tuning self-supervised pre-trained networks [31,32,40], and as an external validation set for assessing a model's generalizability [41]. The 3D volumes in this dataset have been curated to only contain clean XYZ imaging in order to ensure correct and consistent annotations, or segmentations, which has been observed to be integral to the evaluation of machine learning models [42].…”
mentioning
confidence: 99%