2018
DOI: 10.1007/978-3-030-00934-2_94

3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes

Abstract: While deep convolutional neural networks (CNN) have been successfully applied for 2D image analysis, it is still challenging to apply them to 3D anisotropic volumes, especially when the within-slice resolution is much higher than the between-slice resolution and when the amount of 3D volumes is relatively small. On one hand, direct learning of CNN with 3D convolution kernels suffers from the lack of data and likely ends up with poor generalization; insufficient GPU memory limits the model size or representatio…

Cited by 138 publications (115 citation statements)
References 18 publications
“…The first two convolutional layers adopt kernel sizes of 7×7×1 with stride [2, 2, 1] and 1×1×3 with stride [1, 1, 1]. The overall network architecture is effectively verified by [13], while we add the searching process for color blocks to choose between 2D, 3D, and P3D.…”
Section: Methods
confidence: 99%
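The stem quoted above downsamples only in-plane (7×7×1 kernel, stride [2, 2, 1]) before a 1×1×3 convolution mixes information across slices. A minimal sketch of the resulting feature-map shapes, assuming "same" padding (the padding scheme and the example volume size are assumptions, not stated in the quote):

```python
import math

def conv_out(size, stride):
    # output length per axis for a "same"-padded convolution
    return math.ceil(size / stride)

def stem_shapes(shape):
    # layer 1: 7x7x1 kernel, stride (2, 2, 1) -> in-plane downsampling only
    s1 = tuple(conv_out(d, s) for d, s in zip(shape, (2, 2, 1)))
    # layer 2: 1x1x3 kernel, stride (1, 1, 1) -> through-plane mixing, no downsampling
    s2 = tuple(conv_out(d, s) for d, s in zip(s1, (1, 1, 1)))
    return s1, s2

# e.g. a 512x512 in-plane, 40-slice anisotropic volume
print(stem_shapes((512, 512, 40)))  # -> ((256, 256, 40), (256, 256, 40))
```

Only the high-resolution in-plane axes are reduced; the sparse slice axis keeps its full extent for the through-plane convolution.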
“…3, we define 3 Decoder cells, composed of the 2D Decoder D0, the 3D Decoder D1, and the P3D Decoder D2. The Decoder cells are defined as dense blocks, which show powerful representation ability in [8,13]. The input of the b-th Decoder cell is denoted as x_b and its output as x_{b+1}, which is the input of the (b+1)-th Decoder cell.…”
Section: Decoder Search Space
confidence: 99%
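The dense-block connectivity the quote relies on can be sketched with toy stand-ins: each layer receives the channel-wise concatenation of the block input and every previous layer's output. The layer functions below are hypothetical placeholders for the 2D/3D/P3D convolutions, not the authors' implementation:

```python
import numpy as np

def dense_block(x, layers):
    # Dense connectivity: each layer sees the concatenation (along the
    # channel axis) of the block input and all previous layer outputs.
    feats = [x]
    for layer in layers:
        feats.append(layer(np.concatenate(feats, axis=0)))
    return np.concatenate(feats, axis=0)

# toy stand-in "layers": each collapses its input to one new feature channel
layers = [lambda f: f.mean(axis=0, keepdims=True) for _ in range(3)]
out = dense_block(np.ones((4, 2, 2)), layers)
print(out.shape)  # (7, 2, 2): 4 input channels + 3 grown channels
```

The output channel count grows by one per toy layer (4 + 3 = 7), mirroring the growth-rate behaviour of real dense blocks.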
“…Given all pairs of images X and pseudo labels Ŷ, we re-sample them to 1 mm³ isotropic resolution and train an ensemble E of n fully convolutional neural networks to segment the given foreground classes, with P(X) = E(X) standing for the softmax output probability maps for the different classes in the image. Our network architectures follow the encoder-decoder network proposed in [15], named AH-Net, and [5], based on the popular 3D U-Net architecture [3] with residual connections [16], named SegResNet. For training and implementing these neural networks, we used the NVIDIA Clara Train SDK and an NVIDIA Tesla V100 GPU with 16 GB memory.…”
Section: Deep Learning Based Segmentation With Noisy Labels
confidence: 99%
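The ensemble prediction P(X) = E(X) above amounts to averaging the per-model softmax probability maps. A minimal sketch with hypothetical stand-in models (the function name and toy probabilities are illustrative, not from the cited work):

```python
import numpy as np

def ensemble_predict(models, x):
    # P(X) = E(X): average the softmax probability maps of the n models
    probs = np.stack([m(x) for m in models])  # (n, num_classes, ...)
    return probs.mean(axis=0)

# toy stand-ins for trained networks returning per-class probabilities
m1 = lambda x: np.array([[0.8, 0.2]])
m2 = lambda x: np.array([[0.4, 0.6]])
print(ensemble_predict([m1, m2], None))  # [[0.6 0.4]]
```

Averaging probabilities (rather than hard labels) keeps the per-class confidence information, which matters when the pseudo labels Ŷ are noisy.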
“…For training and implementing these neural networks, we used the NVIDIA Clara Train SDK and an NVIDIA Tesla V100 GPU with 16 GB memory. As in [15], we initialize AH-Net from ImageNet pretrained weights using a ResNet-18 encoder branch, utilizing anisotropic (3×3×1) kernels in the encoder path in order to make use of pretrained weights from 2D computer vision tasks. While the initial weights are learned from 2D, all convolutions are still applied in a full 3D fashion throughout the network, allowing it to efficiently learn 3D features from the image.…”
Section: Deep Learning Based Segmentation With Noisy Labels
confidence: 99%
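The 2D-to-3D transfer described above can be sketched as a simple weight inflation: a pretrained 2D 3×3 kernel becomes an anisotropic 3×3×1 3D kernel by appending a singleton through-plane axis, after which the convolution runs in full 3D. This is a hedged sketch of the idea, not the paper's exact initialization code:

```python
import numpy as np

def inflate_2d_weight(w2d):
    # Reuse a pretrained 2D kernel as an anisotropic 3D kernel:
    # (C_out, C_in, 3, 3) -> (C_out, C_in, 3, 3, 1)
    return w2d[..., np.newaxis]

w2d = np.ones((64, 3, 3, 3))  # e.g. a ResNet-18 first-stage conv weight
w3d = inflate_2d_weight(w2d)
print(w3d.shape)  # (64, 3, 3, 3, 1)
```

Because the through-plane extent is 1, the inflated kernel computes exactly the same in-plane response as the 2D original on each slice, so the pretrained features are preserved at initialization while later layers learn through-plane context.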