2017
DOI: 10.1109/tpami.2016.2552172
Learning from Weak and Noisy Labels for Semantic Segmentation

Cited by 113 publications (57 citation statements)
References 54 publications
“…They obtained best results using region proposal algorithms to create semantic segmentation training data directly from bounding boxes. Lu et al. modelled this problem as a simultaneous learning and denoising task through a convex optimization problem [31]. Ahn and Kwak proposed combining class activation maps, random walks, and a learned network that predicts whether pixels belong to the same region to perform semantic segmentation from image-level labels [1].…”
Section: Related Work (mentioning)
confidence: 99%
“…However, this method was proposed for image clustering and does not apply to pixel-level labeling. Other researchers have considered techniques based on non-negative matrix factorization to infer the pixel-level labels of pre-segmented regions (known as superpixels) within different images (Niu et al., 2015; Lu et al., 2017). Furthermore, Zhang and Gong (2016) proposed a non-negative matrix co-factorization approach that jointly learns a discriminative dictionary and a linear classifier that assigns features from segmented images to different classes.…”
Section: Weakly-Supervised Labeling (mentioning)
confidence: 99%
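The non-negative matrix factorization idea referenced above can be sketched as a toy example. This is an illustrative implementation using standard Lee–Seung multiplicative updates, not the cited authors' exact formulation; the feature matrix `X` and the reading of factor `W` as soft class assignments for superpixels are assumptions for illustration.

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Factor a non-negative matrix X (superpixels x features) as W @ H
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1   # soft class memberships per superpixel
    H = rng.random((k, m)) + 0.1   # per-class feature prototypes
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy usage: 30 hypothetical superpixel feature vectors, 3 putative classes.
X = np.abs(np.random.default_rng(1).normal(size=(30, 8)))
W, H = nmf(X, k=3)
labels = W.argmax(axis=1)  # one pseudo-label per superpixel
```

The multiplicative updates keep both factors non-negative by construction, which is what lets the rows of `W` be interpreted as (unnormalized) class-membership scores.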
“…Generally speaking, there are four types of existing methods: (I) robust learning based on probabilistic graphical models, where the noisy patterns are often modeled as latent variables [30,25]; (II) progressive and self-paced learning, where easy and clean examples are learned first and hard and noisy labels are progressively considered [10]; (III) loss-correction methods, where the loss function is corrected iteratively [22]; and (IV) network architecture-based methods, where the noisy patterns are modeled with specifically designed modules [15]. Meanwhile, there have also been efforts to design deep robust models for specific tasks and applications: [20] proposes a method for learning from weak and noisy labels for semantic segmentation, and [32] proposes a deep robust unsupervised method for saliency detection.…”
Section: Learning with Noisy Data (mentioning)
confidence: 99%
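As a concrete illustration of category (III), a minimal sketch of forward loss correction with a label-noise transition matrix is shown below. This is the generic technique, not the specific method of [22]; the transition matrix `T` is assumed known here, whereas in practice it must be estimated.

```python
import numpy as np

def forward_corrected_nll(probs, noisy_labels, T):
    """Negative log-likelihood after 'forward' correction: the model's
    clean-class posterior is pushed through the noise transition matrix
    T, where T[i, j] = P(noisy label = j | clean label = i)."""
    corrected = probs @ T                      # P(noisy label | x)
    rows = np.arange(len(noisy_labels))
    return -np.mean(np.log(corrected[rows, noisy_labels] + 1e-12))

# With an identity T (no noise assumed), the correction is a no-op and
# the loss reduces to the plain negative log-likelihood.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss_id = forward_corrected_nll(probs, labels, np.eye(3))
```

Minimizing this corrected loss on noisy labels is, under the usual assumptions, consistent with minimizing the uncorrected loss on clean labels.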
“…When it comes to deep learning, it is known that several kinds of factors can drive a deep learning model away from a perfect one, with data perturbation being a typical example. Besides the notorious issues arising from the crowdsourcing process, deep learning is in itself known to be more vulnerable to contaminated data, since the extremely high model complexity brings extra risk of overfitting the noisy/contaminated data [20,10,30,32,15,25,22]. We believe that guaranteeing robustness is one of the biggest challenges when constructing deep SVP prediction models.…”
Section: Introduction (mentioning)
confidence: 99%