2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01634
Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers

Cited by 159 publications (101 citation statements).
References 37 publications.
“…Observe that we improved the BPM state of the art by 3.91 mIoU points and BPM+CRF by 5.4 mIoU, both w.r.t. AFA [39], which so far held the best accuracy on both.…”
Section: Comparisons With State-of-the-art
Confidence: 99%
“…The latter is connected with the class prediction via a BCE loss. More recently, Vision Transformers [15] have emerged as an alternative way to generate CAMs [58,39]. Our method is the first to generate baseline pseudo-masks using only a ViT, without CAMs.…”
Section: Related Work
Confidence: 99%