2022
DOI: 10.48550/arxiv.2203.08414
Preprint

Unsupervised Semantic Segmentation by Distilling Feature Correspondences

Cited by 18 publications (32 citation statements)
References 0 publications

“…Another solution is to make the representations more explainable, so that the causes of mode-split can be spotted and corrected. We propose that by combining unsupervised semantic segmentation (Cho et al, 2021; Hamilton et al, 2022; Xu et al, 2022) with a bag-of-features (BagNet) (Brendel and Bethge, 2019) training objective, it is possible to produce disentangled features that map each dimension to pixels. Therefore, whenever mode-split happens and the corresponding dimensions are identified, human intervention in either the recoding setup or the training hyperparameters can be applied.…”
Section: Discussion (mentioning)
confidence: 99%
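The goal of features that "map each dimension to pixels" in the excerpt above can be pictured with a short sketch: given dense, disentangled features, each feature dimension is read out as a per-pixel response map. This is an illustrative assumption about shapes and post-processing, not the cited method's implementation; the helper name `dimension_heatmap` is hypothetical.

```python
# Illustrative sketch only: with dense, disentangled features, "mapping a
# dimension to pixels" amounts to reading out one channel as a per-pixel
# response map. Shapes and the upsampling choice are assumptions.
import torch
import torch.nn.functional as F

def dimension_heatmap(features: torch.Tensor, dim: int, out_hw: tuple) -> torch.Tensor:
    """features: (C, H, W) dense features from some backbone.
    Returns an (out_H, out_W) heatmap for channel `dim`, upsampled to image
    resolution and rescaled to [0, 1] for visualization."""
    fmap = features[dim][None, None]                                   # (1, 1, H, W)
    fmap = F.interpolate(fmap, size=out_hw, mode="bilinear", align_corners=False)
    fmap = fmap[0, 0]
    return (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-8)
```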
“…(Siméoni et al, 2021) localizes the objects with a self-supervised transformer. (Hamilton et al, 2022) introduces the feature correspondences that distinguish between different classes. Most recently, a new co-segmentation framework based on DINO feature (Amir et al, 2021) has been proposed and achieves better results on object co-segmentation and part co-segmentation.…”
Section: Related Work (mentioning)
confidence: 99%
“…Semantic Correspondence across Views: Numerous works have explored and demonstrated the importance of object appearance when generating compact feature correspondences across views (Hénaff et al, 2022; Li et al, 2018). This property is then exploited in self-supervised 2D semantic segmentation frameworks (Hénaff et al, 2022; Li et al, 2018; Chen et al, 2020) to generate semantic representations by selecting positive and negative pairs with either random or KNN-based rules (Hamilton et al, 2022). Drawing inspiration from these prior works, we construct the visual feature correspondence for NeRF at the appearance level using a heuristic rule.…”
Section: Cross View Appearance Correspondence (mentioning)
confidence: 99%
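The feature-correspondence and KNN-based pair selection referenced in this excerpt (Hamilton et al, 2022) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes dense features from a frozen self-supervised backbone (e.g. a DINO ViT), and the helper names `feature_correspondence` and `knn_positive_indices` are assumptions for illustration.

```python
# Minimal sketch of correspondence-distillation ingredients: a dense
# feature-correspondence tensor between two images, and KNN selection of
# positive image pairs from pooled features. Shapes and names are assumptions.
import torch
import torch.nn.functional as F

def feature_correspondence(f_a: torch.Tensor, f_b: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity correspondence between two dense feature maps.

    f_a: (C, H, W) features of image A from a frozen backbone.
    f_b: (C, H, W) features of image B (a KNN retrieval, an augmented view,
         or a random negative, depending on the pair-selection rule).
    Returns an (H*W, H*W) tensor of pairwise cosine similarities.
    """
    a = F.normalize(f_a.flatten(1), dim=0)   # (C, H*W), unit norm per location
    b = F.normalize(f_b.flatten(1), dim=0)
    return a.T @ b                           # (H*W, H*W) correspondence tensor

def knn_positive_indices(pooled: torch.Tensor, k: int = 7) -> torch.Tensor:
    """Select KNN positive pairs from image-level (pooled) features.

    pooled: (N, C) global features; returns (N, k) indices of each image's
    k nearest neighbours, excluding the image itself.
    """
    g = F.normalize(pooled, dim=1)
    sim = g @ g.T                            # (N, N) image-level similarity
    sim.fill_diagonal_(-float("inf"))        # exclude self-matches
    return sim.topk(k, dim=1).indices
```

A random-pair rule would simply replace the KNN lookup with a shuffled index, which is the contrast the excerpt draws between "random or KNN-based" pair selection.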