2020
DOI: 10.1007/978-3-030-58536-5_21
Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation

Abstract: This paper studies the problem of learning semantic segmentation from image-level supervision only. Current popular solutions leverage object localization maps from classifiers as supervision signals, and struggle to make the localization maps capture more complete object content. Rather than previous efforts that primarily focus on intra-image information, we address the value of cross-image semantic relations for comprehensive object pattern mining. To achieve this, two neural co-attentions are incorporated i…

Cited by 243 publications (163 citation statements). References 72 publications.
“…Datasets and Metrics: Following most of the prior works [3,4,5,6,7,10,11,12], we evaluate our method on the PASCAL VOC 2012 semantic segmentation benchmark [14]. It includes images with pixel-wise class labels, of which 1,464, 1,449, and 1,456 are used for training, validation, and test, respectively.…”
Section: Setup
confidence: 99%
“…The backbone ResNet-101 is pre-trained on ImageNet. Baselines: We compare our approach with several state-of-the-art weakly-supervised image semantic segmentation methods [3,4,5,6,7,10,11,12] on training, validation, and test sets. The comparison with PSA [3] and IRNet [4] singles out the benefits of our feature propagation framework over label propagation, since our GCN adopts the same affinity matrix as theirs for propagating features rather than activation scores of CAMs.

Method         | mIoU (%)
PSA [3]        | 59.7
SC-CAM [6]     | 63.4
SEAM [5]       | 63.6
IRNet [4]      | 66.5
SingleStage [7]| 66.9
WSGCN-P        | 64.0
WSGCN-I        | 68.0
…”
Section: Setup
confidence: 99%
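The distinction drawn above between label propagation (smoothing CAM activation scores with an affinity matrix, as in PSA/IRNet) and feature propagation (smoothing pixel features before classification) can be illustrated with a minimal NumPy sketch. All names, shapes, and the random data here are assumptions for illustration, not the cited papers' code:

```python
import numpy as np

# Hypothetical toy setup: 6 pixels, 3 classes, 4-dim features.
rng = np.random.default_rng(0)
n_pixels, n_classes, n_feat = 6, 3, 4

# A shared pairwise affinity matrix, row-normalized so each row sums to 1.
A = rng.random((n_pixels, n_pixels))
A = A / A.sum(axis=1, keepdims=True)

cam = rng.random((n_pixels, n_classes))   # CAM activation scores per pixel
feat = rng.random((n_pixels, n_feat))     # pixel features
W = rng.random((n_feat, n_classes))       # an assumed classifier head

# Label propagation (PSA/IRNet style): smooth the class scores directly.
scores_label_prop = A @ cam

# Feature propagation (as described for the GCN above): smooth the
# features first, then classify the propagated features.
scores_feat_prop = (A @ feat) @ W

print(scores_label_prop.shape, scores_feat_prop.shape)  # (6, 3) (6, 3)
```

Both routes use the same affinity matrix A; they differ only in whether A is applied to class scores or to features, which is exactly the comparison the excerpt isolates.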
“…Experimental results show that our complete pseudo labels achieve higher Mean Intersection over Union (mIoU) than label propagation [3,4]. The net effect is that the semantic segmentation network trained with our complete pseudo labels outperforms the state-of-the-art baselines [5,6,7,10,11,12] on the PASCAL VOC 2012 dataset.…”
Section: Introduction
confidence: 98%
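The mIoU metric used in the comparison above can be sketched as follows. This is a minimal illustration assuming NumPy integer label maps, not the authors' evaluation code; the function name and toy data are assumptions:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union from two integer label maps.

    Builds a confusion matrix over all pixels, then averages the
    per-class IoU = TP / (TP + FP + FN).
    """
    pred = pred.ravel()
    gt = gt.ravel()
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(num_classes * gt + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)  # guard against empty classes
    return iou.mean()

# Toy example: 2 classes on a 2x2 label map; one pixel is misclassified,
# giving per-class IoUs of 1/2 and 2/3.
gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(mean_iou(pred, gt, 2))  # prints 0.5833333333333333
```

The PASCAL VOC 2012 benchmark cited in the excerpts reports exactly this per-class-averaged IoU (over 21 classes, including background).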
“…Thus, weak supervision is more applicable and desirable for deep learning techniques [9]. This observation has sparked many weakly supervised learning studies [10,11,12,13], while other studies have attempted semi-supervised anomaly detection using autoencoders [14].…”
Section: Introduction
confidence: 99%