Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation (2020)
DOI: 10.1007/978-3-030-59719-1_51

Cited by 17 publications (27 citation statements)
References 18 publications

“…The inference time is typically less than 5 seconds. On the hold-out dataset, our model achieves 0.9 Recall level at the 3.8 False Positive predictions per image and 0.71 object-wise Dice Score [18].…”
Section: B. Deep Learning Model (mentioning)
Confidence: 95%
“…In [15], the improved patch sampling technique was proposed. Several works [16], [17], [18] proposed to reweight a loss function to achieve higher segmentation scores.…”
Section: A. Related Work (mentioning)
Confidence: 99%
“…In [13], the improved patch sampling technique was proposed. Finally, several works [14], [15], [16] proposed to reweight a loss function to achieve higher segmentation scores.…”
Section: A. Related Work (mentioning)
Confidence: 99%
“…Thus, Tumor Sampling considerably increases the object-wise Recall of the model, especially for the small tumors, which is more carefully evaluated in [14]. Secondly, we supplement the Binary Cross-Entropy (BCE) loss function with the inverse weighting strategy [16]. Inverse weighting assigns larger weights to smaller lesions (inversely proportional to the lesion volume), thus further increasing the CNN's ability to detect small lesions.…”
Section: Deep Learning Model (mentioning)
Confidence: 99%
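
The last statement describes the inverse weighting idea concretely, so a short sketch may help. The snippet below is a minimal Python example, not the paper's implementation: it assigns every foreground voxel a weight inversely proportional to the volume of its lesion (connected component), leaves background weights at 1, and plugs the weights into a voxel-wise BCE. The names inverse_lesion_weights and weighted_bce are illustrative, and the exact normalization used in the paper may differ.

# Minimal sketch of inverse (lesion-volume) loss weighting for 3D segmentation.
# Illustrative only; names and normalization are assumptions, not the paper's code.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import label


def inverse_lesion_weights(mask: np.ndarray) -> np.ndarray:
    """Per-voxel weights inversely proportional to the volume of the lesion
    (connected component) each foreground voxel belongs to; background stays at 1."""
    weights = np.ones(mask.shape, dtype=np.float32)
    components, n = label(mask)                   # 3D connected components
    for k in range(1, n + 1):
        voxels = components == k
        weights[voxels] = 1.0 / voxels.sum()      # smaller lesions get larger weights
    return weights


def weighted_bce(logits: torch.Tensor, target: torch.Tensor,
                 weights: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy with per-voxel weights, normalized by the weight sum."""
    per_voxel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * per_voxel).sum() / weights.sum()


# Usage on a toy 3D volume with one small and one large lesion (shape [D, H, W]).
gt = np.zeros((32, 64, 64), dtype=np.float32)
gt[2:4, 5:7, 5:7] = 1
gt[10:20, 20:40, 20:40] = 1
w = torch.from_numpy(inverse_lesion_weights(gt))
logits = torch.randn(gt.shape)
loss = weighted_bce(logits, torch.from_numpy(gt), w)
print(float(loss))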