2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.555

BIER — Boosting Independent Embeddings Robustly

Cited by 137 publications (125 citation statements)
References 24 publications
“…The sampling of the training tuples receives plenty of attention [18,22,23], either through mining [7,22], proxy-based approximations [16,17], or hard negative generation [4,13]. Finally, ensemble methods have recently become an increasingly popular way of improving the performance of DML architectures [11,19,33,35]. Our proposed HORDE regularizer is a complementary approach.…”
Section: Related Work (mentioning)
confidence: 99%
“…This clearly shows the advantage of the proposed SBDL loss and its inherent property of treating training samples based on their degree of information. Our method also outperforms the attribute-aware attention model (A³M) and other ensemble-based methods such as the hard-aware cascaded (HDC) network [53] and the gradient-boosted BIER and A-BIER [41], [42]. Moreover, SGML outperforms the state-of-the-art attention-based ensemble (ABE) method [1] by +1.3% Recall@1.…”
Section: Results: DeepFashion Dataset 1) Comparison With State-of-the-Art (mentioning)
confidence: 81%
“…Compared to these methods, SGML allows interactions between the tasks and hence inherently considers different granularities of semantic similarity. The top-performing methods, such as the hard-aware cascaded network (HDC) [53] and gradient-boosted embeddings (BIER, A-BIER) [42], [41], are recently proposed methods that use ensemble techniques to boost retrieval performance. Different from these, SGML uses a single model while achieving a Recall@1 of 91.8%, outperforming these ensemble-based methods.…”
Section: Results: DeepFashion Dataset 1) Comparison With State-of-the-Art (mentioning)
confidence: 99%
“…Metric learning. Our work is also related to general deep metric learning [36,56,26,37,54]. However, these methods only conducted experiments on the In-Shop clothes retrieval dataset, while our work focuses on customer-to-shop clothes retrieval, which is much more challenging, as analyzed in [34].…”
Section: Datasets (mentioning)
confidence: 99%