2021
DOI: 10.48550/arxiv.2108.02266
Preprint

Boosting Few-shot Semantic Segmentation with Transformers

Abstract: Because fully supervised semantic segmentation methods require sufficient fully-labeled data to work well and cannot generalize to unseen classes, few-shot segmentation has attracted considerable research attention. Previous works extract features from support and query images, which are processed jointly before making predictions on the query images. The whole process is based on convolutional neural networks (CNN), so only local information is used. In this paper, we propose a TR…
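As a concrete illustration of the support/query pipeline the abstract describes, the sketch below shows a common prototype-style baseline: support features are pooled under the support mask into a class prototype, which is then compared against every query location. This is a minimal, generic PyTorch sketch; the tensor shapes and the `backbone` are assumptions for illustration, not the paper's transformer-based method.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Pool support features under the support mask into a single prototype.

    feat: (B, C, H, W) support feature map from a shared backbone
    mask: (B, 1, h, w) binary foreground mask for the support image
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    proto = (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
    return proto  # (B, C)

def predict_query(query_feat, prototype):
    """Score each query location by cosine similarity to the support prototype."""
    proto = prototype[:, :, None, None]                  # (B, C, 1, 1)
    sim = F.cosine_similarity(query_feat, proto, dim=1)  # (B, H, W)
    return sim  # higher score = more likely foreground

# Hypothetical usage with a shared CNN backbone (an assumption, not the paper's model):
# support_feat = backbone(support_img); query_feat = backbone(query_img)
# prototype = masked_average_pooling(support_feat, support_mask)
# pred = predict_query(query_feat, prototype)
```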

Cited by 3 publications (9 citation statements)
References 49 publications (87 reference statements)

Citation statements (ordered by relevance):
“…Inspired by the few-shot learning paradigm [48,57], which aims to learn-to-learn a model for a novel task with only a limited number of samples, few-shot segmentation has received considerable attention. Following the success of [54], prototypical networks [57] and numerous other works [8,25,30,32,43,55,59,68,75-77,82] proposed to utilize a prototype extracted from support samples, which is used to refine the query features to contain the relevant support information. In addition, inspired by [80], which observed that the use of high-level features leads to a performance drop, [62] proposed to utilize high-level features by computing a prior map that takes the maximum score within a correlation map.…”
Section: Related Work
Confidence: 99%
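The prior map mentioned in the statement above (attributed to [62]) can be sketched as follows: compute a cosine correlation map between every query location and every support foreground location on high-level features, then keep the maximum score per query location. This is a minimal illustrative sketch; the tensor names and the min-max normalization are assumptions, not an exact reproduction of [62].

```python
import torch
import torch.nn.functional as F

def prior_map(query_feat, support_feat, support_mask, eps=1e-6):
    """Prior map: for each query location, the maximum correlation with any
    support foreground location, computed on high-level features.

    query_feat:   (B, C, Hq, Wq)
    support_feat: (B, C, Hs, Ws)
    support_mask: (B, 1, Hs, Ws) binary foreground mask (already resized)
    """
    B, C, Hq, Wq = query_feat.shape
    # Zero out support background so only foreground locations can match.
    support_feat = support_feat * support_mask

    q = F.normalize(query_feat.flatten(2), dim=1)      # (B, C, Hq*Wq)
    s = F.normalize(support_feat.flatten(2), dim=1)    # (B, C, Hs*Ws)

    corr = torch.bmm(q.transpose(1, 2), s)             # (B, Hq*Wq, Hs*Ws) cosine scores
    prior = corr.max(dim=2).values.view(B, 1, Hq, Wq)  # max over support locations

    # Min-max normalize per image so the prior lies in [0, 1].
    lo = prior.amin(dim=(2, 3), keepdim=True)
    hi = prior.amax(dim=(2, 3), keepdim=True)
    return (prior - lo) / (hi - lo + eps)
```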
“…In addition, inspired by [80], which observed that the use of high-level features leads to a performance drop, [62] proposed to utilize high-level features by computing a prior map that takes the maximum score within a correlation map. Many variants [59,78] extended this idea of utilizing prior maps to guide the feature learning.…”
Section: Related Work
Confidence: 99%
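On the point that variants use prior maps "to guide the feature learning": one common way to do this (an assumption here for illustration, not a specific mechanism from the cited works) is to append the prior map as an extra channel of the query features before decoding, so later layers can condition on the regions the support already deems likely foreground.

```python
import torch

def guide_with_prior(query_feat, prior):
    # Hypothetical shapes: query_feat (B, C, H, W), prior (B, 1, H, W),
    # e.g. the output of the prior_map() sketch above.
    # Concatenating the prior as an extra channel lets the decoder attend to
    # locations that correlate strongly with the support foreground.
    return torch.cat([query_feat, prior], dim=1)  # (B, C + 1, H, W)
```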