2021
DOI: 10.48550/arxiv.2106.12657
Preprint
Extreme Multi-label Learning for Semantic Matching in Product Search

Cited by 1 publication (1 citation statement) | References 34 publications
“…Extreme Multi-label Classification (XML) involves classifying instances into a set of the most relevant labels from an extremely large (on the order of millions) set of all possible labels (Agrawal et al. 2013; Jain, Prabhu, and Varma 2016; Babbar and Schölkopf 2017). When these instances are short text documents, many successful applications of the XML framework have been found in ranking and recommendation tasks, such as prediction of Related Searches on search engines (Jain et al. 2019), suggestion of query phrases corresponding to short textual descriptions of products on e-stores (Chang et al. 2020a), and product-to-product recommendation using only the product title (Dahiya et al. 2021a,b; Saini et al. 2021; Mittal et al. 2021a; Chang et al. 2021).…”
Section: Introduction
Confidence: 99%
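The statement above defines XML as selecting the few most relevant labels out of a label space on the order of millions. A minimal sketch of that prediction step (the score vector here is a random stand-in for a model's output; none of this is from the paper itself) might look like:

```python
import numpy as np

# Assumed setup for illustration: one instance, a score per label,
# and a label space on the order of millions.
num_labels = 1_000_000
rng = np.random.default_rng(0)
scores = rng.random(num_labels)  # stand-in for model scores

k = 5
# argpartition finds the k highest-scoring labels in O(num_labels) time,
# avoiding a full sort of the million-entry score vector.
top_k = np.argpartition(scores, -k)[-k:]
top_k = top_k[np.argsort(-scores[top_k])]  # order by descending score
print(top_k.tolist())
```

The use of `argpartition` rather than a full sort reflects the scale emphasized in the quote: with millions of labels, per-instance prediction cost matters.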