Weakly-Supervised Semantic Segmentation by Redistributing Region Scores Back to the Pixels
2016
DOI: 10.1007/978-3-319-45886-1_31

Cited by 4 publications (2 citation statements)
References 19 publications
“…In terms of segmentation quality, currently only methods based on deep convolutional networks [19,33] are strong enough to tackle segmentation datasets of difficulty similar to what fully-supervised methods can handle, such as the PASCAL VOC 2012 [9], which we make use of in this work. In particular, MIL-FCN [25], MIL-ILP [26] and the approaches of [4,18] leverage deep networks in a multiple instance learning setting, differing mainly in their pooling strategies, i.e. how they convert their internal spatial representation to per-image labels.…”
Section: Related Work (confidence: 99%)
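The statement above distinguishes MIL-based weakly supervised methods by how they pool per-pixel class scores into per-image labels. As a minimal, illustrative sketch (not the pooling used by any specific cited method), the NumPy snippet below contrasts global max pooling with log-sum-exp pooling; the function name image_level_scores and the sharpness value r=4.0 are assumptions made here for illustration.

```python
import numpy as np

def image_level_scores(score_map, pooling="lse", r=4.0):
    """Collapse a per-pixel class score map of shape (H, W, C) into
    per-image class scores of shape (C,). This pooling step is where
    MIL-style weakly supervised methods typically differ.

    pooling="max": global max pooling (hard MIL assumption)
    pooling="lse": log-sum-exp, a smooth approximation of the max
    r: sharpness of the log-sum-exp pooling (illustrative value)
    """
    c = score_map.shape[-1]
    flat = score_map.reshape(-1, c)          # (H*W, C)
    if pooling == "max":
        return flat.max(axis=0)
    # LSE_r(s) = (1/r) * log(mean(exp(r * s))), computed stably
    m = flat.max(axis=0)
    return m + np.log(np.exp(r * (flat - m)).mean(axis=0)) / r

# toy usage: an 8x8 score map over 3 classes
scores = np.random.randn(8, 8, 3)
print(image_level_scores(scores, pooling="max"))
print(image_level_scores(scores, pooling="lse"))
```

Max pooling credits a class to a single pixel, while log-sum-exp spreads the training signal over many high-scoring pixels; this is the kind of design choice the quoted passage refers to.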
“…The mIoU values of each category on the validation and testing datasets demonstrate the effectiveness of our method. We compare our method with several methods, including SFR (Kim and Hwang, 2016), RSP (Krapac and Šegvić, 2016), CCNN, and MIL-seg. The mIoU values of our method are the highest in most categories.…”
Section: Comparisons With Other Methods (confidence: 99%)
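The comparison quoted above is reported in mean intersection-over-union (mIoU). As a small sketch of how that metric is typically computed from integer label maps (the function name and the handling of classes absent from both maps are choices made here, not taken from the cited works):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Per-class intersection-over-union and its mean (mIoU) for two
    integer label maps of the same shape. Classes absent from both the
    prediction and the ground truth are skipped."""
    ious = {}
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return float(np.mean(list(ious.values()))), ious

# toy usage: background (0) vs. one object class (1)
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
miou, per_class = mean_iou(pred, gt, num_classes=2)
print(miou, per_class)
```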