2021
DOI: 10.1007/s12194-021-00630-6
Effects of sample size and data augmentation on U-Net-based automatic segmentation of various organs

Cited by 29 publications (15 citation statements)
References 26 publications
“…For example, because the code contains only random cropping and side-to-side flipping, we expect that additional data augmentation by simply replicating input images will improve performance [20]. Additionally, by changing the dataset files in the cloud, the U-Net framework based on pix2pix can be applied to other ocular imaging domains and other tasks.…”
Section: Discussion
confidence: 99%
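The augmentation pipeline mentioned in the statement above (random cropping plus side-to-side flipping) can be sketched as follows; the function name, crop size, and flip probability are illustrative assumptions, not taken from the cited code:

```python
import numpy as np

def augment(image, crop_size, rng=None):
    """Randomly crop a 2-D image, then flip it side-to-side half the time.

    A minimal sketch of the two augmentations described in the quote;
    `crop_size` and the 0.5 flip probability are assumed values.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    ch, cw = crop_size
    top = int(rng.integers(0, h - ch + 1))    # random vertical offset
    left = int(rng.integers(0, w - cw + 1))   # random horizontal offset
    patch = image[top:top + ch, left:left + cw]
    if rng.random() < 0.5:                    # side-to-side (horizontal) flip
        patch = patch[:, ::-1]
    return patch

img = np.arange(64, dtype=float).reshape(8, 8)
out = augment(img, (4, 4))
print(out.shape)
```

In practice such transforms are applied on the fly each epoch, so the network rarely sees the exact same crop twice.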
“…A previous study described that the performance of U-Net stabilized after training on more than 200 radiographic images, and that data augmentation provided an additional gain in segmentation performance [20]. The exploration of further data or augmentation techniques may be required in future work. Second, only three ophthalmologists performed annotations using polygons or free-hand drawing.…”
Section: Discussion
confidence: 99%
“…To predict the Dice scores of the three deep learning models with a large training set, we evaluated two regression models for learning curves using Eqs. (5) [29] and (6) [28]. We used the Akaike information criterion corrected for a finite number of samples (AICc) in Eq.…”
Section: Experimental Methods
confidence: 99%
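The learning-curve regression described above can be illustrated with a small sketch. The exact Eqs. (5) and (6) are not reproduced in this excerpt, so an inverse power law approaching a plateau, Dice(n) = a − b·n^(−c), is assumed as a stand-in; the training-set sizes and Dice scores below are fabricated for illustration only:

```python
import numpy as np

# Hypothetical (training-set size, Dice) pairs -- not data from the study.
n = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])
dice = np.array([0.62, 0.71, 0.78, 0.83, 0.86, 0.88])

# Fit Dice(n) = a - b * n**(-c): grid-search the exponent c, and for each c
# solve the remaining linear problem in (a, b) by least squares.
best = None
for c in np.linspace(0.1, 2.0, 96):
    X = np.column_stack([np.ones_like(n), -n ** (-c)])
    coef, *_ = np.linalg.lstsq(X, dice, rcond=None)
    rss = float(np.sum((dice - X @ coef) ** 2))
    if best is None or rss < best[0]:
        best = (rss, c, coef)

rss, c, (a, b) = best
k, m = 3, len(n)                               # 3 fitted parameters, 6 samples
aic = m * np.log(rss / m) + 2 * k
aicc = aic + 2 * k * (k + 1) / (m - k - 1)     # small-sample (finite-n) correction
print(round(a, 3), round(aicc, 1))
```

Comparing AICc across candidate curve forms, as the quoted passage does for Eqs. (5) and (6), then selects the model that balances fit quality against the small number of sampled training-set sizes.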
“…There has been no previous head-to-head comparison as a function of the number of training samples. Second, we evaluated the number of training samples required for such methods, including regression analyses for learning curves [28, 29]. Third, we tested for the best approach to improve segmentation, using a GPU with a large amount of RAM (48 GB).…”
Section: Introduction
confidence: 99%