2021
DOI: 10.3390/healthcare9080938
Loss Weightings for Improving Imbalanced Brain Structure Segmentation Using Fully Convolutional Networks

Abstract: Brain structure segmentation on magnetic resonance (MR) images is important for various clinical applications. It has been automatically performed by using fully convolutional networks. However, it suffers from the class imbalance problem. To address this problem, we investigated how loss weighting strategies work for brain structure segmentation tasks with different class imbalance situations on MR images. In this study, we adopted segmentation tasks of the cerebrum, cerebellum, brainstem, and blood vessels f…
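The abstract names loss weighting as the remedy for class imbalance but the snippet cuts off before any formula. As a minimal sketch of one common weighting strategy, the PyTorch code below derives per-class weights from inverse voxel frequency and passes them to a weighted cross-entropy loss; the function name, class count, and tensor shapes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Per-class weights inversely proportional to voxel counts.

    `labels` is an integer tensor of ground-truth class indices
    (e.g. background, cerebrum, cerebellum, brainstem, vessels).
    """
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    counts = counts.clamp(min=1.0)            # avoid division by zero for absent classes
    return counts.sum() / (num_classes * counts)

# Illustrative usage: weight the cross-entropy loss of an FCN output.
num_classes = 5
logits = torch.randn(2, num_classes, 64, 64)      # (batch, classes, H, W) network output
target = torch.randint(0, num_classes, (2, 64, 64))

w = inverse_frequency_weights(target, num_classes)
loss = F.cross_entropy(logits, target, weight=w)
```

Rare structures such as blood vessels then contribute more per voxel to the loss than the dominant background class.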

Cited by 32 publications (14 citation statements). References 32 publications (58 reference statements).
“…Our intuition in this regard is that this is due mainly to the class imbalance problem and to a lack of sufficient data to train the model for those structures specifically. For this reason, it is important in future work to explore other methods that address this problem, for example, improving the calculation of the class weights used in the loss functions, similar to what is performed in [74], or using additional data augmentation techniques to increase the number of samples of classes with less information. Another factor that we considered in the analysis is that deep-learning methods based on transformers lack the inductive biases inherent in CNNs and require large amounts of data to generalize well [77], so their usage on small medical datasets remains difficult without internal modification of their self-attention module.…”
Section: Discussion and Future Work
confidence: 99%
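This statement suggests improved class weights or targeted data augmentation without giving details. As one hypothetical way to bias training towards under-represented structures, the NumPy sketch below samples training-patch centres preferentially from voxels of rare classes; all names and parameters are illustrative assumptions, not the cited authors' method.

```python
import numpy as np

def sample_patch_centre(label_volume: np.ndarray,
                        rare_classes: set,
                        rare_fraction: float = 0.5) -> tuple:
    """Pick a patch centre, biased towards voxels of under-represented classes.

    With probability `rare_fraction` the centre is drawn from voxels belonging
    to one of `rare_classes`; otherwise it is drawn uniformly from the volume.
    The caller crops a training patch around the returned (z, y, x) index.
    """
    if np.random.rand() < rare_fraction:
        mask = np.isin(label_volume, list(rare_classes))
        candidates = np.argwhere(mask)
        if len(candidates) > 0:
            return tuple(candidates[np.random.randint(len(candidates))])
    # otherwise (or if no rare voxels exist) pick a uniformly random voxel
    return tuple(int(np.random.randint(0, s)) for s in label_volume.shape)
```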
“…Even the size difference between the structures and the background is usually significant. Therefore, multiple loss functions and weighting strategies for loss functions have been proposed to improve imbalanced brain structure segmentation [74]. In the proposed approach, we used a combination of Dice loss [75] and focal loss [76].…”
Section: Methods
confidence: 99%
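The cited approach combines Dice and focal losses, but the snippet shows no formula. Below is a minimal PyTorch sketch of one common way to combine them for a binary mask, assuming sigmoid outputs and an equal weighting of the two terms; gamma, alpha, and the tensor shapes are assumptions, not values from the cited work.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                    gamma: float = 2.0, alpha: float = 0.5,
                    eps: float = 1e-6) -> torch.Tensor:
    """Weighted sum of soft Dice loss and focal loss for binary segmentation.

    `logits` and `target` have shape (batch, 1, H, W); `target` is a float
    tensor of 0s and 1s. `alpha` balances the two terms (illustrative default).
    """
    prob = torch.sigmoid(logits)

    # Soft Dice loss per sample
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice_loss = 1.0 - (2.0 * inter + eps) / (union + eps)

    # Focal loss: down-weight easy, well-classified pixels
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    focal = ((1.0 - p_t) ** gamma * bce).mean(dim=(1, 2, 3))

    return (alpha * dice_loss + (1.0 - alpha) * focal).mean()
```

The Dice term directly optimizes region overlap, while the focal term concentrates the pixel-wise loss on hard examples, which is why this pairing is often used for imbalanced masks.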
“…The former could be addressed by scanning with smaller voxel dimensions [3] or by resampling scans into smaller voxel dimensions during the preprocessing steps. The latter could be addressed by implementing a class balancing scheme according to the pixel-wise frequency of each class in the dataset [52]. Since the goal of the current paper was to assess the value of synth-DECT scans, we did not implement class balancing schemes to mitigate the errors found at the margins of the scans.…”
Section: Discussion
confidence: 99%
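The quoted passage mentions class balancing by pixel-wise class frequency without showing how such weights might be derived. As an illustrative sketch of one frequency-based scheme, median-frequency balancing, the NumPy function below computes per-class weights over a set of label volumes; the function and its details are assumptions, not the cited papers' exact recipe.

```python
import numpy as np

def median_frequency_weights(label_volumes, num_classes: int) -> np.ndarray:
    """Median-frequency balancing from pixel-wise class frequencies.

    `label_volumes` is an iterable of integer label arrays. For each class c,
    freq(c) = voxels of c / total voxels of the volumes that contain c, and
    weight(c) = median(freq) / freq(c), so rarer classes get larger weights.
    """
    voxel_count = np.zeros(num_classes, dtype=np.float64)
    image_voxels = np.zeros(num_classes, dtype=np.float64)
    for labels in label_volumes:
        present = np.unique(labels)
        voxel_count += np.bincount(labels.flatten(), minlength=num_classes)
        image_voxels[present] += labels.size
    freq = voxel_count / np.maximum(image_voxels, 1.0)
    return np.median(freq[freq > 0]) / np.maximum(freq, 1e-12)
```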
“…To reduce memory consumption without impacting performance, we selected a basis of 24 filters of 3 × 3 (24 for the first layer, 48 for the second, and so on), as proposed in [23]. The loss function (L) was a combination of binary cross-entropy (L_BCE) [24] and Dice loss (L_DL) [25], which has been demonstrated to be well suited for imbalanced structure segmentation [26]. L was defined as:…”
Section: AI Training Framework
confidence: 99%
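The snippet is cut off before the definition of L. A minimal sketch of a combined BCE + Dice loss is given below, assuming a simple weighted sum L = L_BCE + lam * L_DL for a binary mask; the weighting factor lam is an assumption, since the exact combination used in the cited work is not shown in the snippet.

```python
import torch
import torch.nn.functional as F

def combined_bce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                           lam: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """L = L_BCE + lam * L_DL for a binary mask (lam = 1.0 is an assumed default;
    the exact weighting in the cited work is not given in the snippet)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return bce + lam * dice
```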