2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.770

Combining Bottom-Up, Top-Down, and Smoothness Cues for Weakly Supervised Image Segmentation

Cited by 112 publications
(75 citation statements)
References 30 publications
“…Weakly supervised segmentation: We compared our method with other recently introduced weakly supervised semantic segmentation methods with various levels of supervision.

Method            Venue     Data  val   test
[24]              -         10K   52.8  53.7
TPL [14]          ICCV '17  10K   53.1  53.8
AE_PSL [31]       CVPR '17  10K   55.0  55.7
DCSP [2]          BMVC '17  10K   58.6  59.2
MEFF [8]          CVPR '18  10K   -     55.6
GAIN [19]         CVPR '18  10K   55.3  56.8
MCOF [30]         CVPR '18  10K   56.2  57.6
AffinityNet [1]   CVPR '18  10K   58.4  60.5
DSRG [12]         CVPR '18  10K   59.0  60.4
MDC [33]          CVPR '18  10K   60.4  60.8
FickleNet (Ours)  -         10K   61.2  61.9

We do not need additional training steps or additional networks, in contrast to many other recent techniques, such as AffinityNet [1], which requires an additional network for learning semantic affinities, or AE-PSL [31] and MDC [33], which require several training steps. Table 2 shows results on PASCAL VOC 2012 images with a ResNet-based segmentation network.…”
Section: Comparison to the State of the Art
confidence: 99%
“…Oh et al [29] and Chaudhry et al [5] considered linking saliency and attention cues together, but they adopted different strategies to acquire semantic objects. Roy and Todorovic [36] leveraged both bottom-up and top-down attention cues and fused them via a conditional random field formulated as a recurrent network. Ahn et al [1] used image-level class labels to generate an initial set of CAMs and then propagated those CAMs using random walk predictions from AffinityNet.…”
Section: Related Work
confidence: 99%
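The random-walk refinement mentioned in the excerpt above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `propagate_cams`, the toy shapes, and the `beta`/`t` values are all assumptions; the idea is that a pixel-affinity matrix (such as one AffinityNet would predict) is turned into a transition matrix and applied repeatedly to diffuse CAM activations between semantically similar pixels.

```python
import numpy as np

def propagate_cams(cams, affinity, beta=8, t=4):
    """Refine class activation maps by a t-step random walk.

    cams:     (num_classes, H*W) activation scores, flattened over pixels.
    affinity: (H*W, H*W) non-negative pixel-affinity matrix.
    The affinities are sharpened element-wise (power beta), row-normalized
    into a stochastic transition matrix, and applied t times.
    """
    trans = affinity ** beta                          # sharpen affinities
    trans = trans / trans.sum(axis=1, keepdims=True)  # row-stochastic
    refined = cams.copy()
    for _ in range(t):                                # t random-walk steps
        refined = refined @ trans
    return refined

# Toy example: 2 classes over a 2x2 image (4 pixels).
cams = np.array([[1.0, 0.9, 0.0, 0.1],
                 [0.0, 0.1, 1.0, 0.9]])
affinity = np.eye(4) + 0.2                            # weak all-pairs affinity
refined = propagate_cams(cams, affinity)
print(refined.shape)                                  # (2, 4)
```

Because each transition matrix row sums to one, total activation mass per class is preserved; the walk only redistributes it toward strongly affine pixels.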
“…Our method achieves mIoU values of 63.9 and 65.0 for PASCAL VOC 2012 validation and test images respectively, which is 94.4% of that of DeepLab [3], which was trained with fully annotated data and achieved an mIoU of 67.6 on validation images.

Method            Venue     val   test
[45]              -         49.8  51.2
TransferNet [11]  CVPR '16  52.1  51.2
AISI [16]         ECCV '18  61.3  62.
[33]              -         52.8  53.7
TPL [22]          ICCV '17  53.1  53.8
AE_PSL [44]       CVPR '17  55.0  55.7
DCSP [2]          BMVC '17  58.6  59.2
MEFF [9]          CVPR '18  -     55.6
GAIN [26]         CVPR '18  55.3  56.8
MCOF [43]         CVPR '18  56.2  57.6
AffinityNet [1]   CVPR '18  58.4  60.5
DSRG [17]         CVPR '18  59.0  60.4
MDC [46]          CVPR '18  60.4  60.8
SeeNet [15]       NIPS '18  61.1  60.7
FickleNet [24]    CVPR '19  61.2  61.9
Ours              -         63.9  65.0

Our method is 3.1% better on test images than the best method which uses only image-level annotations for supervision.…”
Section: Results on Image Segmentation
confidence: 99%
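The mIoU numbers quoted throughout these excerpts are mean intersection-over-union scores. A minimal sketch of the metric (function name, class count, and toy arrays are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt.

    pred, gt: integer arrays of per-pixel class labels with the same shape.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                       # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
# class 0: inter=1, union=2 -> 0.5 ; class 1: inter=2, union=3 -> 0.667
print(round(mean_iou(pred, gt, 2), 3))      # 0.583
```

The PASCAL VOC benchmark averages this quantity over 21 classes (20 objects plus background), which is the figure reported in the val/test columns above.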