2022
DOI: 10.1007/978-3-031-19818-2_5
TransFGU: A Top-Down Approach to Fine-Grained Unsupervised Semantic Segmentation

Cited by 20 publications (3 citation statements)
References 28 publications
“…Similarly, PiCIE (Cho et al 2021) exploits multiple photometric and geometric transformations to learn consistent class representations. Another line of research is the k-means clustering-based methods (Caron et al 2018; Cho et al 2021; Yin et al 2022) that generate pixel-level pseudo-labels to train the model with a classification loss. TransFGU (Yin et al 2022) obtains high-level semantic information from feature clustering and generates pseudo-labels from Grad-CAM (Selvaraju et al 2017).…”
Section: Unsupervised Semantic Segmentation
confidence: 99%
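The k-means pseudo-labeling idea described in the excerpt above can be sketched in a few lines: cluster the per-pixel features, then treat each pixel's cluster index as its pseudo-label for a classification loss. The function below is a minimal illustrative sketch in plain NumPy with a deterministic initialization; it is not the actual TransFGU or PiCIE implementation, and the toy features are an assumption for demonstration.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=10):
    """Plain k-means over per-pixel feature vectors; returns one cluster
    index per pixel, usable as a pseudo-label (illustrative sketch only)."""
    n = len(features)
    # deterministic init: evenly spaced samples become the initial centers
    centers = features[:: max(n // k, 1)][:k].astype(float).copy()
    for _ in range(iters):
        # squared Euclidean distance from every pixel feature to every center
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # recompute each center as the mean of its assigned features
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(0)
    return labels

# toy example: 16 "pixels" with 2-D features forming two obvious groups
feats = np.concatenate([np.zeros((8, 2)), np.ones((8, 2))])
labels = kmeans_pseudo_labels(feats, k=2)
```

In the full pipeline the pseudo-labels would then supervise a segmentation head with a standard cross-entropy loss.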
“…Feature dimension reduction has been widely used in designing information bottleneck architectures (Goldfeld and Polyanskiy 2020; Tishby, Pereira, and Bialek 2000). Such architectures have proven useful in a wide range of applications, including image classification (He et al 2015), latent representation learning (Kingma and Welling 2013), model compression (Yu et al 2017), and lightweight model adaptation (Hu et al 2022).…”
Section: Information Bottleneck
confidence: 99%
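As a concrete instance of the feature dimension reduction mentioned in the excerpt above, a PCA projection keeps only the top-d principal directions of the features, discarding the rest — a simple stand-in for the "bottleneck" idea of retaining the most informative components. This is a minimal illustrative sketch; the function name and toy data are assumptions, not from the cited works.

```python
import numpy as np

def pca_reduce(X, d):
    """Project feature matrix X (n samples x D dims) onto its top-d
    principal components (illustrative dimension-reduction sketch)."""
    Xc = X - X.mean(0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

# toy example: squeeze 16-D features down to a 4-D bottleneck
X = np.random.default_rng(0).normal(size=(100, 16))
Z = pca_reduce(X, 4)
```

The projected coordinates are mutually uncorrelated, so each of the d retained dimensions carries distinct variance from the original features.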