Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022
DOI: 10.1145/3534678.3539194
Learning Backward Compatible Embeddings

Cited by 10 publications (2 citation statements)
References 11 publications
“…This has given rise to the study of compatible representation learning. Shen et al. (2020); Budnik & Avrithis (2021); Ramanujan et al. (2022); Hu et al. (2022); Zhao et al. (2022); Duggal et al. (2021) all proposed methods to update the embedding model to achieve better performance while remaining compatible with features generated by the old model (see Figure 1-left-top). Despite relative success, compatibility learning is not perfect: performing retrieval with a mixture of old and new features achieves lower accuracy than replacing all the old features with new ones.…”
Section: Introduction
Confidence: 99%
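The mixed-retrieval setting described above can be illustrated with a minimal sketch (hypothetical models and synthetic data, not the paper's method): a gallery indexed once with old-model embeddings is queried with new-model embeddings, which only works well if the new model stays compatible with the old embedding space. Here the "new" model is a small rotation of the "old" one, so compatibility holds only partially.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_old(x):
    # Hypothetical "old" model: identity embedding, L2-normalized.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def embed_new(x):
    # Hypothetical "new" model: a rotation of the old space. A backward-
    # compatible new model would keep embed_new(query) close to
    # embed_old(gallery) for matching items; the rotation here makes
    # compatibility only approximate.
    theta = 0.1
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    z = x @ R.T
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Gallery of items, indexed once with the old model.
gallery = rng.normal(size=(100, 2))
gallery_old = embed_old(gallery)

def retrieve(query_vec, index):
    # Nearest neighbor by cosine similarity (vectors are unit-norm).
    return int(np.argmax(index @ query_vec))

# Query with the new model against old-model features ("mixed" retrieval)
# versus against a fully re-indexed gallery.
query = gallery[7]
mixed_hit = retrieve(embed_new(query), gallery_old) == 7
fresh_hit = retrieve(embed_new(query), embed_new(gallery)) == 7
# Re-indexing the whole gallery with the new model always recovers the
# query item; the mixed setting may miss it when compatibility is imperfect.
print(mixed_hit, fresh_hit)
```

This mirrors the gap the citing work points at: unless the new model is trained for backward compatibility, mixed old/new retrieval degrades relative to a full re-indexing of the gallery.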
“…In this work, we focus on closing this performance gap. Further, some previous methods degrade the new model's performance when trying to make it more compatible with the old model (Shen et al., 2020; Hu et al., 2022), or require the availability of side-information, i.e., extra features from a separate self-supervised model (Ramanujan et al., 2022), which may not exist for an already-deployed system. We relax both constraints in this work.…”
Section: Introduction
Confidence: 99%