2017
DOI: 10.1007/978-3-319-66179-7_27

Error Corrective Boosting for Learning Fully Convolutional Networks with Limited Data

Abstract: Training deep fully convolutional neural networks (F-CNNs) for semantic image segmentation requires access to abundant labeled data. While large datasets of unlabeled image data are available in medical applications, access to manually labeled data is very limited. We propose to automatically create auxiliary labels on initially unlabeled data with existing tools and to use them for pre-training. For the subsequent fine-tuning of the network with manually labeled data, we introduce error corrective boosting…

Cited by 76 publications (87 citation statements) · References 16 publications
“…To our knowledge, Roy et al. [21] created the first architecture that, by combining U-Net and SegNet, segmented the whole brain slice by slice using only 2D convolutions. Error corrective boosting was the key to achieving good performance; apart from this, they did not need any additional input to obtain the desired results.…”
Section: Discussion
confidence: 99%
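As an illustration of what such a U-Net/SegNet hybrid might look like, here is a minimal PyTorch sketch; it is not the authors' code, and the channel sizes, class count, and layer names are assumptions. It combines SegNet-style unpooling via stored max-pool indices with a U-Net-style skip concatenation in a single 2D encoder-decoder stage.

```python
import torch
import torch.nn as nn

class HybridEncDec(nn.Module):
    """Minimal 2D encoder-decoder stage combining SegNet-style
    unpooling (max-pool indices) with a U-Net-style skip concatenation.
    Channel sizes and depth are illustrative assumptions, not the
    architecture from the cited paper."""

    def __init__(self, in_ch=1, feat=64, num_classes=28):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1),
            nn.BatchNorm2d(feat), nn.ReLU(inplace=True),
        )
        # return_indices=True lets the decoder unpool exactly (SegNet idea)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec = nn.Sequential(
            # input channels doubled by the U-Net-style concatenation
            nn.Conv2d(2 * feat, feat, 3, padding=1),
            nn.BatchNorm2d(feat), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(feat, num_classes, 1)  # per-pixel scores

    def forward(self, x):
        skip = self.enc(x)            # encoder features, kept for the skip
        down, idx = self.pool(skip)   # pooled features + max indices
        up = self.unpool(down, idx, output_size=skip.shape)  # SegNet-style
        merged = torch.cat([up, skip], dim=1)  # U-Net-style concatenation
        return self.classifier(self.dec(merged))

# One 256x256 slice in, a per-class score map of the same size out.
logits = HybridEncDec()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 28, 256, 256])
```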
“…2b), on the other hand, train end-to-end and voxel-to-voxel in each slice. Compared to the voxel-wise CNN technique, they receive the whole image as input and generate the entire segmentation from the slice input, which better uses and preserves neighborhood information in the predicted segmentation [30,21,22,29]. It is also possible to extract patches from the whole image, resulting in the segmentation of the whole patch that is given as input.…”
Section: Related Work
confidence: 99%
“…To overcome the class imbalance, median frequency balancing is used in the class weights $w_c(\mathbf{x})$ of the Dice loss to compensate for classes with low occupation probability. Furthermore, a boundary compensation is implemented in the class weights to increase the weights on anatomical boundaries for contour correction. Thus, the class weights $w_c(\mathbf{x})$ are composed of two terms, median frequency balancing and boundary compensation:
$$w_c(\mathbf{x}) = \frac{\mathrm{median}(f)}{f_c} + \lambda \cdot \mathbb{I}\big(|\nabla G_c(\mathbf{x})| > 0\big),$$
where the scalar $f_c$ is the class probability, that is, the occupation frequency of class $c$ in the training data.…”
Section: Methods
confidence: 99%
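A small sketch of how these composite class weights could be computed from a label map, assuming NumPy arrays; the function name, the `lambda_` default, and the use of the label-map gradient as a proxy for the class-mask boundary are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def class_weights(label_map, num_classes, lambda_=1.0):
    """Per-voxel weights w_c(x) = median(f)/f_c + lambda * 1(|grad G_c(x)| > 0).

    label_map : integer array of class labels (e.g. a 2D slice or 3D volume).
    Returns an array of the same shape with one weight per voxel.
    Illustrative sketch of the formula quoted above, not the authors' code.
    """
    # Occupation frequency f_c of every class in the training labels.
    freqs = np.bincount(label_map.ravel(), minlength=num_classes) / label_map.size
    freqs = np.where(freqs > 0, freqs, np.nan)   # ignore absent classes
    mfb = np.nanmedian(freqs) / freqs            # median frequency balancing term

    weights = mfb[label_map]                     # broadcast one weight per voxel
    # Boundary compensation: voxels where the label changes along any axis
    # are where some class mask G_c has a non-zero spatial gradient.
    grad = np.gradient(label_map.astype(float))
    boundary = np.any([np.abs(g) > 0 for g in grad], axis=0)
    return weights + lambda_ * boundary
```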
“…For both datasets, we trained the models with a composite, equally balanced loss function comprised of Dice loss and weighted cross-entropy. The weights were computed with median frequency balancing [16] to circumvent class imbalance. We used an Adam optimizer, initialized with learning rate 0.0001 for 3D U-Net and 0.00005 for V-Net.…”
Section: Methods
confidence: 99%
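For concreteness, here is a minimal PyTorch sketch of such a composite, equally balanced Dice plus weighted cross-entropy loss; the function name, the smoothing constant, and the exact 0.5/0.5 split are assumptions, with the class weights supplied by median frequency balancing as above.

```python
import torch
import torch.nn.functional as F

def composite_loss(logits, target, class_weights, smooth=1e-5):
    """Equally balanced Dice + weighted cross-entropy (illustrative sketch).

    logits : (N, C, H, W) raw network outputs
    target : (N, H, W) integer class labels
    class_weights : (C,) tensor, e.g. from median frequency balancing
    """
    ce = F.cross_entropy(logits, target, weight=class_weights)

    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])  # (N, H, W, C)
    one_hot = one_hot.permute(0, 3, 1, 2).float()             # (N, C, H, W)

    # Soft Dice per class, averaged over classes.
    dims = (0, 2, 3)
    inter = (probs * one_hot).sum(dims)
    union = probs.sum(dims) + one_hot.sum(dims)
    dice = 1.0 - ((2.0 * inter + smooth) / (union + smooth)).mean()

    return 0.5 * dice + 0.5 * ce

# Usage with the learning rates quoted above (Adam):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # 3D U-Net
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # V-Net
```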