2022
DOI: 10.1609/aaai.v36i3.20145
Renovate Yourself: Calibrating Feature Representation of Misclassified Pixels for Semantic Segmentation

Abstract: Existing image semantic segmentation methods favor learning consistent representations by extracting long-range contextual features with the attention, multi-scale, or graph aggregation strategies. These methods usually treat the misclassified and correctly classified pixels equally, hence misleading the optimization process and causing inconsistent intra-class pixel feature representations in the embedding space during learning. In this paper, we propose the auxiliary representation calibration head (RCH), wh…

Cited by 4 publications (2 citation statements)
References 39 publications
“…He et al [3,4,12] realize contrastive learning in a more efficient way using a momentum encoder and a dynamic queue. Wang et al [33] define the positive and negative samples in a supervised manner. They design a metric function loss to calibrate these misclassified feature representations for better intra-class consistency and segmentation performance.…”
Section: Contrastive Learning
confidence: 99%
“…2 shows, we conduct the feature refinement in a manner of contrastive learning. The idea is inspired by [33]. Each sample will be refined by both its ground truth actions and other ambiguous actions.…”
Section: Contrastive Feature Refinement
confidence: 99%
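The statements above describe a supervised contrastive scheme: a metric loss calibrates misclassified pixel features by pulling them toward their ground-truth class while pushing them away from other classes. Below is a minimal NumPy sketch of that general idea, not the paper's actual RCH formulation; the function name `calibration_loss`, the temperature `tau`, and the choice to build class centers only from correctly classified pixels are illustrative assumptions.

```python
import numpy as np

def calibration_loss(embeddings, preds, labels, tau=0.1):
    """InfoNCE-style calibration over misclassified pixels (illustrative).

    embeddings: (N, D) pixel feature vectors
    preds, labels: (N,) predicted / ground-truth class ids
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    # Class centers built only from correctly classified pixels,
    # so the pull targets are not polluted by erroneous features.
    centers = {}
    for c in np.unique(labels):
        ok = (labels == c) & (preds == c)
        if ok.any():
            centers[c] = embeddings[ok].mean(axis=0)

    losses = []
    for i in np.where(preds != labels)[0]:  # misclassified pixels only
        c = labels[i]
        if c not in centers:
            continue
        pos = np.exp(cos(embeddings[i], centers[c]) / tau)
        denom = sum(np.exp(cos(embeddings[i], centers[k]) / tau)
                    for k in centers)
        losses.append(-np.log(pos / denom))
    return float(np.mean(losses)) if losses else 0.0
```

Treating only the misclassified pixels as anchors matches the asymmetry the abstract emphasizes: correctly classified pixels define the intra-class targets, while the loss is applied to the pixels that broke consistency.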