2019
DOI: 10.1007/978-3-030-34356-9_35

Training Multiscale-CNN for Large Microscopy Image Classification in One Hour

Abstract: Existing approaches to training neural networks on large images require either cropping or down-sampling the data during pre-processing, using small batch sizes, or splitting the model across devices, mainly because of the prohibitively limited memory capacity available on GPUs and emerging accelerators. These techniques often lead to longer time to convergence or time to train (TTT) and, in some cases, lower model accuracy. CPUs, on the other hand, can leverage significant amounts of memory. While much work has been done on…

Cited by 3 publications (2 citation statements)
References: 24 publications
“…In order to improve accuracy and reduce overfitting, we performed five augmentations on each image of the training set as follows and as previously described 57 : (1) 90° rotation, (2) one horizontal mirror, (3) one vertical mirror, (4) 90° rotation of horizontal mirror and (5) 90° rotation of vertical mirror. This results in a sixfold increase of the number of available images for training (five rotations + original).…”
Section: Methodsmentioning
confidence: 99%
“…In order to improve accuracy and reduce overfitting, we performed five augmentations on each image of the training set as follows and as previously described 49 : 1) 90 degrees rotation, 2) one horizontal mirror, 3) one vertical mirror, 4) 90 degrees rotation of horizontal mirror and 5) 90 degrees rotation of vertical mirror. This results in a 6-fold increase of the number of available images for training (five rotations + original).…”
Section: Data Augmentationmentioning
confidence: 99%
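
The augmentation scheme quoted in both citing papers is the same five-transform recipe. A minimal sketch of it is below, assuming images are NumPy arrays (H x W, or H x W x C); the function name `augment_fivefold` is illustrative and not taken from either paper.

```python
import numpy as np

def augment_fivefold(image: np.ndarray) -> list[np.ndarray]:
    """Produce the five augmented variants described in the citing papers.

    Assumes `image` is an H x W (or H x W x C) numpy array.
    """
    h_mirror = np.fliplr(image)   # horizontal mirror
    v_mirror = np.flipud(image)   # vertical mirror
    return [
        np.rot90(image),          # (1) 90-degree rotation
        h_mirror,                 # (2) horizontal mirror
        v_mirror,                 # (3) vertical mirror
        np.rot90(h_mirror),       # (4) 90-degree rotation of horizontal mirror
        np.rot90(v_mirror),       # (5) 90-degree rotation of vertical mirror
    ]

# The training set grows sixfold: the original image plus the five variants.
# variants = [img] + augment_fivefold(img)
```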