Proceedings of the 3rd ACM SIGCAS Conference on Computing and Sustainable Societies 2020
DOI: 10.1145/3378393.3402254
Learning to segment from misaligned and partial labels

Cited by 6 publications (2 citation statements)
References 11 publications
“…Existing work [17] uses OSM data to generate the annotations for buildings in satellite imagery. To correct the misaligned annotations from the OSM data, the method leverages a small set of manually corrected annotations to train an alignment correction network (ACN) before training a semantic segmentation model.…”
Section: Related Work (mentioning)
confidence: 99%
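The two-stage idea described in this citation statement (correct misaligned OSM-derived labels against a small trusted set, then train segmentation) can be illustrated with a deliberately simplified, non-neural stand-in. Instead of an alignment correction network, the sketch below brute-force searches for a rigid (dy, dx) offset that best aligns a noisy building mask with a manually corrected reference; all function names and the toy data are illustrative assumptions, not from the cited work.

```python
import numpy as np

def shift_mask(mask, dy, dx):
    """Shift a binary mask by (dy, dx), filling exposed borders with zeros."""
    out = np.zeros_like(mask)
    h, w = mask.shape
    ys_dst = slice(max(dy, 0), min(h + dy, h))
    xs_dst = slice(max(dx, 0), min(w + dx, w))
    ys_src = slice(max(-dy, 0), min(h - dy, h))
    xs_src = slice(max(-dx, 0), min(w - dx, w))
    out[ys_dst, xs_dst] = mask[ys_src, xs_src]
    return out

def estimate_offset(noisy_label, reference, max_shift=5):
    """Brute-force search for the shift that maximizes overlap between a
    noisy OSM-derived label and a trusted, manually corrected mask."""
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = int(np.sum(shift_mask(noisy_label, dy, dx) & reference))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Toy example: a building footprint whose OSM label is offset by (-2, +3).
ref = np.zeros((32, 32), dtype=np.uint8)
ref[10:20, 12:22] = 1
noisy = shift_mask(ref, -2, 3)           # simulate misalignment
dy, dx = estimate_offset(noisy, ref)
corrected = shift_mask(noisy, dy, dx)
print((dy, dx))                           # → (2, -3), the recovered shift
print(int(np.sum(corrected ^ ref)))       # → 0, i.e. perfect correction
```

A real alignment correction network generalizes this step: it learns per-image (often per-building, non-rigid) corrections from the small corrected subset, so it can fix labels for which no reference mask exists.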
“…"Noise model based" approaches seek to estimate the underlying noise structure in order to de-emphasize, relabel, or remove noisy labels so that the model does not learn from them, while "noise model free" approaches exploit noisy labels to improve robustness, for example to speed up gradient descent through hard example mining or to avoid overfitting (Chang et al., 2017). Many model architectures and loss functions for dealing with noisy labels exist across these categories (Mnih and Hinton, 2012; Fobi et al., 2020; Kang et al., 2020; Kang et al., 2021). This paper falls into the "noise model based" category, aiming to identify a noisy label distribution for potential relabeling. At the same time, as interpretability becomes a major focus in deep learning, research on predictive uncertainty is on the rise, since critical applications of deep learning models require uncertainty measures such as confidence estimates to interpret and trust model predictions (Henne et al., 2020).…”
(mentioning)
confidence: 99%
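The "noise model based" relabeling step this statement describes can be sketched minimally: flag samples where a model's confident prediction contradicts the noisy label as candidates for relabeling. The function name and the fixed confidence threshold are assumptions for illustration; the cited works use richer, calibrated uncertainty estimates rather than a raw probability cutoff.

```python
import numpy as np

def flag_suspect_labels(probs, labels, confidence=0.9):
    """Return a boolean mask of label positions worth re-examining.

    probs  -- model's predicted probability of the positive class, per sample
    labels -- noisy binary labels, per sample
    A position is flagged when the model is confident (max class
    probability >= `confidence`) AND its prediction disagrees with the
    given label -- a simplified stand-in for a learned noise model.
    """
    pred = (probs >= 0.5).astype(labels.dtype)
    confident = np.maximum(probs, 1.0 - probs) >= confidence
    return confident & (pred != labels)

# Toy example: only the first sample is a confident disagreement.
probs = np.array([0.98, 0.60, 0.03, 0.95])
labels = np.array([0, 0, 0, 1])
print(flag_suspect_labels(probs, labels))  # → [ True False False False]
```

Samples flagged this way can be sent for manual review or down-weighted in the loss, which is the practical payoff of identifying the noisy label distribution rather than training on it blindly.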