Proceedings of the 25th ACM International Conference on Multimedia 2017
DOI: 10.1145/3123266.3123394

Learning Fashion Compatibility with Bidirectional LSTMs

Abstract: The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fash…
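The visual-semantic embedding mentioned in the abstract is commonly trained with a margin-based ranking loss that pulls matching image-text pairs together and pushes non-matching pairs apart. The sketch below is a minimal numpy illustration of that general technique, not the authors' implementation; the function and argument names are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Unit-normalize rows so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def vse_margin_loss(img_emb, txt_emb, margin=0.2):
    """Hinge ranking loss over all in-batch negatives (illustrative).

    img_emb, txt_emb: (n, d) projected image/text features for n
    matching image-text pairs; row i of each matrix describes the
    same fashion item.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    sims = img @ txt.T                  # all pairwise cosine similarities
    pos = np.diag(sims)                 # similarities of matching pairs
    # Penalize negatives that come within `margin` of the positive,
    # in both the image-to-text and text-to-image directions.
    cost_i2t = np.maximum(0.0, margin + sims - pos[:, None])
    cost_t2i = np.maximum(0.0, margin + sims - pos[None, :])
    np.fill_diagonal(cost_i2t, 0.0)     # do not penalize the positives
    np.fill_diagonal(cost_t2i, 0.0)
    return float((cost_i2t.sum() + cost_t2i.sum()) / len(pos))
```

With perfectly aligned, well-separated embeddings (e.g. identity matrices) the loss is zero; any negative pair closer than the margin contributes a positive cost.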


Cited by 336 publications (410 citation statements: 2 supporting, 408 mentioning, 0 contrasting)
References 37 publications
“…Recommendation and Retrieval. Similarity learning has also been used extensively to solve computer vision problems in other domains such as fashion and retail (e.g., [12,35,37]). Using visual attributes is a naturally intuitive way to describe fashion items (e.g.…”
Section: Related Work (mentioning)
confidence: 99%
“…As such, identifying relevant attributes in visual representations of fashion items is essential to reasoning about similarity between them. The deficiency of comparing images by projecting them into a general embedding space as described above is especially apparent in prior work on modeling fashion outfit compatibility [21,12,35,37]. In their approach, Veit et al [37] do not distinguish items by their types but instead attempt to learn the concepts of compatibility and similarity from heterogeneous dyadic co-occurrences of items in user data.…”
Section: Related Work (mentioning)
confidence: 99%
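The pairwise setup attributed to Veit et al above, learning compatibility from dyadic co-occurrences, is typically trained with a contrastive loss over item pairs. The following is a minimal numpy sketch of that loss under that assumption; names and the margin value are illustrative, not taken from the cited work.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, co_occur, margin=1.0):
    """Pairwise contrastive loss over dyadic co-occurrences (illustrative).

    emb_a, emb_b: (n, d) embeddings of the two items in each pair.
    co_occur: (n,) 1 if the items co-occurred (e.g. worn together), else 0.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)
    # Co-occurring pairs are pulled together; non-co-occurring pairs are
    # pushed apart until they are at least `margin` away.
    pos = co_occur * d ** 2
    neg = (1 - co_occur) * np.maximum(0.0, margin - d) ** 2
    return float(np.mean(pos + neg))
```

Identical embeddings for co-occurring pairs, and embeddings separated by at least the margin for non-co-occurring pairs, both yield zero loss.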
“…Li et al [26] use a Recurrent Neural Network (RNN) to predict whether an outfit is popular, which also implicitly learns the compatibility relation between fashion items. Han et al [11] further train a Bi-LSTM to sequentially predict the next item conditioned on the previous ones for learning their compatibility relationship. Song et al [41] employ a dual auto-encoder network to learn the latent compatibility space where they use the BPR model to jointly model the relation between visual and contextual modalities and implicit preferences among fashion items.…”
Section: Visual Matching (mentioning)
confidence: 99%
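The BPR model mentioned in the last statement reduces to maximizing the score gap between a compatible and an incompatible candidate item. A minimal numpy sketch of the standard BPR objective, assuming precomputed compatibility scores (names are illustrative):

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Bayesian Personalized Ranking loss (illustrative).

    pos_scores: (n,) compatibility scores of items known to fit the outfit.
    neg_scores: (n,) scores of sampled incompatible items.
    Minimizing this maximizes the probability that each positive
    outranks its paired negative: -log sigmoid(pos - neg).
    """
    x = pos_scores - neg_scores
    return float(np.mean(np.log1p(np.exp(-x))))  # numerically stable -log sigmoid
```

When positives already outrank negatives by a wide margin the loss approaches zero; with no score gap it equals log 2.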