2016
DOI: 10.1016/j.patcog.2016.01.015
Learning to segment with image-level annotations

Cited by 96 publications (70 citation statements)
References 22 publications
“…expectation maximization (EM) [33], curriculum learning [34], self-paced learning [35], etc.) are widely used in weakly-supervised tasks [9,36-41]. For example, [36] adopts the expectation maximization (EM) algorithm to dynamically predict semantic foreground and background pixels through an alternating training procedure.…”
Section: Iterative Learning Methods
confidence: 99%
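The alternating EM-style procedure described in this excerpt can be sketched as follows. This is a toy illustration, not the cited paper's implementation: the shapes, the tag-constrained argmax E-step, and the gradient-like M-step update (standing in for retraining a CNN) are all assumptions.

```python
import numpy as np

# scores: per-pixel class scores, shape (H, W, C) -- illustrative stand-in
# for a segmentation CNN's output. image_tags: binary vector of length C,
# 1 if the class is present in the image (the only supervision available).

def e_step(scores, image_tags):
    """E-step: assign each pixel to its highest-scoring class among
    the classes that the image-level tags say are present."""
    masked = np.where(image_tags[None, None, :] == 1, scores, -np.inf)
    return masked.argmax(axis=-1)  # (H, W) pseudo-labels

def m_step(scores, pseudo_labels, lr=0.1):
    """M-step (toy stand-in for retraining the CNN): nudge the scores
    toward one-hot targets built from the current pseudo-labels."""
    num_classes = scores.shape[-1]
    targets = np.eye(num_classes)[pseudo_labels]  # (H, W, C) one-hot
    return scores + lr * (targets - scores)       # gradient-like update

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4, 3))
tags = np.array([1, 0, 1])  # classes 0 and 2 present, class 1 absent

for _ in range(5):          # alternate E- and M-steps
    labels = e_step(scores, tags)
    scores = m_step(scores, labels)

# Pseudo-labels never use a class the image tags rule out.
assert set(int(c) for c in np.unique(labels)) <= {0, 2}
```

The key point the excerpt makes is the alternation itself: the model's own predictions, filtered through the image-level tags, become the pixel supervision for the next training round.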
“…To improve localization performance, some approaches [58-61] have proposed to exploit the notion of objectness, either by incorporating it into the loss function [58,59] or by employing a pre-trained network as an external objectness module [60,61]. Another promising way to improve segmentation performance is to utilize additional weakly supervised images, such as web images, to train CNNs [62,63].…”
Section: Weakly Supervised Semantic Segmentation
confidence: 99%
“…Here we review the CNN-based approaches, as these methods [13,14,15,16,17,18,19,20,21,22] provide good segmentation quality on the challenging PASCAL VOC benchmark. Early works [13,14,16] extend the Multiple-Instance Learning (MIL) [24] framework to weakly supervised semantic segmentation, where the loss functions are built at the image-tag level.…”
Section: Related Work
confidence: 99%
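An image-tag-level loss of the kind this excerpt attributes to the MIL-based works can be sketched as below. The max-pooling aggregator and binary cross-entropy form are a common MIL formulation used here for illustration, not the exact loss of any cited paper.

```python
import numpy as np

def mil_image_loss(pixel_scores, image_tags):
    """MIL-style image-level loss: max-pool per-pixel class scores into
    one score per class (treating the image as a 'bag' of pixels), then
    apply binary cross-entropy against the image-level tags."""
    image_scores = pixel_scores.max(axis=(0, 1))   # (C,) bag scores
    probs = 1.0 / (1.0 + np.exp(-image_scores))    # sigmoid per class
    probs = np.clip(probs, 1e-7, 1 - 1e-7)         # numerical safety
    return -np.mean(image_tags * np.log(probs)
                    + (1 - image_tags) * np.log(1 - probs))

# Scores consistent with the tags (class 0 present, class 1 absent)
# should incur a lower loss than the reversed prediction.
tags = np.array([1.0, 0.0])
good = np.stack([np.full((4, 4), 3.0), np.full((4, 4), -3.0)], axis=-1)
bad = -good
assert mil_image_loss(good, tags) < mil_image_loss(bad, tags)
```

No pixel ever receives direct supervision: the loss only constrains the maximum response per class, which is exactly why these methods need only image tags.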
“…Image tags indicate which object class(es) are present in an image. They are usually much easier and faster to obtain than the other human annotations described above, and have thus been used in many weakly supervised semantic segmentation methods [13,14,15,16,17,18,19,20,21,22]. However, unlike the segmentation masks obtained from pixel-wise annotations, image tags do not indicate the location of the object(s) in the image, which makes semantic segmentation much more challenging.…”
Section: Introduction
confidence: 99%