2022
DOI: 10.1007/978-3-031-19781-9_16

Learning Semantic Correspondence with Sparse Annotations

Cited by 13 publications (18 citation statements). References 47 publications.
“…The capability of heavy matching networks hinges on data quantity at first, but the training data remains significantly smaller than other computer vision tasks (e.g., 1.2M images in ImageNet-1K (Russakovsky et al 2015)). Therefore, we argue that previous approaches (Laskar and Kannala 2018; Kim et al 2022; Truong et al 2022; Huang et al 2022), attempting to densify points for training, may not be an underlying solution for the data-hungry problem.…”
Section: Introduction (mentioning; confidence: 92%)
“…Recent methods for semantic correspondence (Min et al 2019a; Liu et al 2020; Li et al 2020a; Li et al 2021; Zhao et al 2021; Min et al 2020; Cho et al 2021) inevitably train complicated matching networks to maximize performance in a supervised manner with limited qualified dataset (Ham et al 2017; Min et al 2019b), which leads to high computational demands and poor generalization capability across datasets. Some unsupervised strategies (Laskar and Kannala 2018; Truong et al 2022; Kim et al 2022; Huang et al 2022) extend their unsupervised loss to the supervised regime and significantly improve the performance of the previous supervised approaches. This shows that the performance of the existing supervised model was not fully learned due to a lack of data.…”
Section: Related Work (mentioning; confidence: 99%)