2019
DOI: 10.1007/978-3-030-32245-8_26

Resource Optimized Neural Architecture Search for 3D Medical Image Segmentation

Abstract: Neural Architecture Search (NAS), a framework which automates the task of designing neural networks, has recently been actively studied in the field of deep learning. However, there are only a few NAS methods suitable for 3D medical image segmentation. Medical 3D images are generally very large; thus it is difficult to apply previous NAS methods due to their GPU computational burden and long training time. We propose the resource-optimized neural architecture search method which can be applied to 3D medical se…

Cited by 31 publications (24 citation statements) | References 12 publications
“…[Th]e situation is much more complicated, and the core is to solve the problem of efficiency and quality comprehensively. Efficiency, in this case, is how to use as few resources as possible (computer equipment, network bandwidth, and time) to complete a predetermined amount of web page collection [17,18]. In the case of bulk collection, where about half a month is usually allowed for collecting the web pages, naturally the more pages collected the better.…”
Section: Architecture Design and Model Building (mentioning)
confidence: 99%
“…We adopt the same image pre-processing strategy as in [7]. Since the annotations of the test datasets are not publicly available, we report 5-fold cross-validation results as in [1,7,8]. We also report validation results on the 2D lesion segmentation dataset released by the Skin Lesion Segmentation and Classification 2018 challenge [5], which provides 2594 training images.…”
Section: Datasets and Settings (mentioning)
confidence: 99%
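A minimal sketch of the 5-fold cross-validation protocol the quote above refers to, assuming a hypothetical list of training case IDs and placeholder train/evaluate steps (sklearn's KFold is used for the splits; this is not the cited authors' actual pipeline):

```python
# Sketch only: 5-fold cross-validation over training cases, since the test
# annotations are not public. Case IDs and the commented-out train/evaluate
# calls are hypothetical placeholders, not the cited authors' code.
from sklearn.model_selection import KFold

case_ids = [f"case_{i:03d}" for i in range(100)]  # hypothetical dataset

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for fold, (train_idx, val_idx) in enumerate(kfold.split(case_ids)):
    train_cases = [case_ids[i] for i in train_idx]
    val_cases = [case_ids[i] for i in val_idx]
    # model = train_segmentation_model(train_cases)      # hypothetical helper
    # fold_scores.append(evaluate_dice(model, val_cases))  # hypothetical helper
    print(f"fold {fold}: {len(train_cases)} train / {len(val_cases)} val")

# The reported cross-validation score is the mean over the five folds.
```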
“…When we finish searching and pruning the network, we retrain the derived network from scratch. The computation of UXNet is cheap, training for 1.5 days on two Titan XP GPUs for the brain task, which is cheaper than RONASMIS [1], which trained for 3.1 days on one RTX 2080Ti GPU, and SCNAS [8], which trained for one day on four V100 GPUs. Please refer to the appendix for more training details of each dataset.…”
Section: Datasets and Settings (mentioning)
confidence: 99%
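For context, a rough back-of-the-envelope comparison of the quoted search costs in GPU-days (training days × number of GPUs). This metric is an illustrative simplification, not a figure reported by the cited papers, and it ignores per-GPU throughput differences between Titan XP, RTX 2080Ti, and V100 cards:

```python
# Illustrative arithmetic only: GPU-days = training days * number of GPUs,
# taken from the figures quoted above. Not a metric from the cited papers.
costs = {
    "UXNet":    1.5 * 2,  # 1.5 days on two Titan XP GPUs  -> 3.0 GPU-days
    "RONASMIS": 3.1 * 1,  # 3.1 days on one RTX 2080Ti GPU -> 3.1 GPU-days
    "SCNAS":    1.0 * 4,  # 1 day on four V100 GPUs        -> 4.0 GPU-days
}
for method, gpu_days in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{method}: {gpu_days:.1f} GPU-days")
```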
“…AutoSNAP combines the flexibility of NASNet with the speed of DARTS by introducing an intuitive yet succinct representation (instead of NAS units) and improving the efficient search and optimization strategy. The medical imaging community has recently confirmed the potential of NAS methods for segmentation [3,16–18], with adaptations for scalable [6] and resource-constrained [1] environments. We are not aware of any application of NAS to CAI.…”
Section: Introduction (mentioning)
confidence: 98%