2022
DOI: 10.48550/arxiv.2206.06122
Preprint

Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning

Abstract: Freezing the pre-trained backbone has become a standard paradigm to avoid overfitting in few-shot segmentation. In this paper, we rethink the paradigm and explore a new regime: fine-tuning a small part of parameters in the backbone. We present a solution to overcome the overfitting problem, leading to better model generalization on learning novel classes. Our method decomposes backbone parameters into three successive matrices via the Singular Value Decomposition (SVD), then only fine-tunes the singular values…
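The recipe described in the abstract can be sketched in a few lines: decompose a pre-trained weight matrix with SVD, freeze the two orthonormal factors, and expose only the singular values as trainable parameters. The sketch below is an illustration of that idea under assumed names and shapes (`svf_decompose`, a 256×64 weight), not the paper's actual implementation.

```python
import numpy as np

def svf_decompose(W):
    """Split W into frozen factors (U, Vt) and the singular-value vector s.

    In SVF, U and Vt would stay frozen while only s is fine-tuned.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U, s, Vt

def svf_reconstruct(U, s, Vt):
    """Rebuild the effective weight from the factors: U @ diag(s) @ Vt."""
    return (U * s) @ Vt  # column-wise scaling is equivalent to U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64))   # stand-in for a pre-trained layer weight
U, s, Vt = svf_decompose(W)

# Only s would be fine-tuned: min(256, 64) = 64 trainable values
# instead of the full 256 * 64 = 16384 weights.
print(s.size, W.size)

# Sanity check: the decomposition reconstructs the original weight.
assert np.allclose(svf_reconstruct(U, s, Vt), W)
```

This is where the parameter saving comes from: for an m×n weight, the trainable set shrinks from m·n values to min(m, n) singular values per decomposed matrix.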

Cited by 4 publications (4 citation statements)
References 33 publications
“…With a ResNet-50 backbone, we achieve results comparable to other methods with a similar number of trainable parameters. The most recent work of Sun et al. [48], which achieves state of the art with a ResNet-50, significantly increases the memory requirements: according to the authors, it requires 128 GB for a batch of 8 (16 GB per image), which is significantly higher than any other few-shot semantic segmentation technique.…”
Section: Discussion
confidence: 99%
“…See appendix for full-sized table.

Method          Fold-0 Fold-1 Fold-2 Fold-3 Mean  | Fold-0 Fold-1 Fold-2 Fold-3 Mean
SVF [48]        46.87  53.80  48.43  44.78  48.47 | 52.25  57.83  51.97  53.41  53.87
BAM [12]        43.41  50.59  47.49  43.42  46.23 | 49.26  54.20  51.63  49.55  51.16
Baseline-HSNet  36.3   43.1   38.7   39.2   39.2  | 43.3   51.3   48.2   45.0   46.9
Ours            42.15  53.22  49.05  48.08  48.12 | 47.50  59.14  53.19  51.16  52.75
HSNet           37.2   44.1   42.4   41.3   41.2  | 45.9   53.0   51.8   47.1   49.5
Ours            45.48  56.47  51.74  49.84  50.88 | 48.87  61.10  55.58  54.03  54.90…”
confidence: 99%
“…SSA outperformed baseline state-of-the-art methods for diverse object localization tasks. Sun et al. [16] introduced a novel approach to few-shot segmentation (FSS) named Singular Value Fine-tuning (SVF), which addressed the overfitting issue in FSS. In FSS, the task is to segment novel-class objects with only a few densely annotated samples.…”
Section: A. Segmentation
confidence: 99%
“…Y. Sun et al. [31] proposed a model fine-tuning method based on singular value decomposition, which can perform fast model fine-tuning using a very small number of parameters, thus achieving the segmentation task with few samples. Z. Li et al. [32] proposed a knowledge-guided approach for few-sample image recognition.…”
Section: Related Work
confidence: 99%