2018
DOI: 10.1007/978-3-319-75238-9_6
Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation Using Holistic Convolutional Networks

Abstract: The Dice score is widely used for binary segmentation due to its robustness to class imbalance. Soft generalisations of the Dice score allow it to be used as a loss function for training convolutional neural networks (CNN). Although CNNs trained using the mean-class Dice score achieve state-of-the-art results on multi-class segmentation, this loss function takes advantage of neither inter-class relationships nor multi-scale information. We argue that an improved loss function should balance misclassifications …
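The soft multi-class Dice loss mentioned in the abstract can be sketched as follows. This is a minimal illustrative implementation of the generic mean-class soft Dice loss, not the paper's Wasserstein generalisation; the function name, array layout, and smoothing term `eps` are assumptions.

```python
import numpy as np

def soft_dice_loss(probs, targets, eps=1e-6):
    """Mean-class soft Dice loss (illustrative sketch).

    probs:   (C, N) array of predicted class probabilities
             (C classes, N voxels).
    targets: (C, N) one-hot ground-truth labels.
    """
    # Per-class overlap between prediction and ground truth.
    intersection = np.sum(probs * targets, axis=1)
    # Per-class sum of predicted and true "mass".
    denom = np.sum(probs, axis=1) + np.sum(targets, axis=1)
    # Soft Dice score per class; eps avoids division by zero
    # for classes absent from both prediction and target.
    dice_per_class = (2.0 * intersection + eps) / (denom + eps)
    # Loss is one minus the mean Dice over classes.
    return 1.0 - dice_per_class.mean()
```

A perfect prediction yields a loss near zero; because the score is computed per class before averaging, rare classes weigh as much as common ones, which is the imbalance-robustness property the abstract refers to.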

Cited by 108 publications (81 citation statements)
References 23 publications
“…As a pre-processing, each image was normalized by the mean value and standard deviation. The Dice loss function [25,9] was used for training. At test time, the augmented prediction number was set to N = 20 for all the network structures.…”
Section: Experiments and Results
confidence: 99%
“…C_l equals 2 in our method. A combination of predictions from multiple scales has also been used in [31,9]. Multi-view Fusion.…”
Section: Anisotropic Convolutional Neural Network
confidence: 99%
“…HighRes3DNet [20] proposes a compact end-to-end 3D CNN structure that maintains high-resolution multi-scale features with dilated convolutions and residual connections [29,6]. Other works also propose using fully convolutional networks [13,8], incorporating large visual contexts by employing a mixture of convolution and downsampling operations [16,12], and handling imbalanced training data by designing new loss functions [9,27] and sampling strategies [27].…”
Section: Introduction
confidence: 99%
“…For instance, in tumour or MS lesion segmentation, obtaining good generalisation is a challenge as most lesions occupy only a small fraction of the entire volume. Two-phase training [24], careful patch selection [22,25,132] and loss functions [21,159,160] are among the proposed strategies to overcome this problem. Although we have observed the success of CNNs, their full capacity has not yet been leveraged in brain MRI analysis.…”
Section: Discussion and Future Directions
confidence: 99%