Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016
DOI: 10.1145/2939672.2939756
Extreme Multi-label Loss Functions for Recommendation, Tagging, Ranking & Other Missing Label Applications

Cited by 240 publications (286 citation statements)
References 21 publications
“…Baselines: XReg was compared to leading extreme classifiers such as PfastreXML [26], Parabel [46], DiSMEC [5] and ProXML [6], traditional multivariate regressors such as one-vs-all least-squares regression (1-vs-all-LS) and LEML [67], and a popular pairwise … Table 2: XReg achieves the best or close to the best ranking and regression performance in both pointwise ("-p") and labelwise ("-l") prediction settings. Re-ranking with tail classifiers (XReg-t) further improves the performance in many cases.…”
Section: Methods (mentioning)
Confidence: 99%
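
The "-p"/"-l" distinction in the excerpt refers to ranking labels for each test point versus ranking test points for each label. The sketch below is illustrative only (it is not XReg's evaluation code) and assumes NumPy, dense score and relevance matrices, and precision@k as the metric; the only difference between the two settings is the axis along which scores are sorted.

# Illustrative sketch only (not XReg's evaluation code), assuming NumPy and
# dense matrices: precision@k computed pointwise (rank labels per test point)
# and labelwise (rank test points per label).
import numpy as np

def precision_at_k(scores, relevance, k, axis):
    # Sort scores in descending order along `axis` and keep the top-k indices.
    order = np.argsort(-scores, axis=axis)
    topk = np.take(order, np.arange(k), axis=axis)
    # Look up the relevance of the top-k entries and average.
    hits = np.take_along_axis(relevance, topk, axis=axis)
    return hits.mean()

rng = np.random.default_rng(0)
scores = rng.random((100, 50))                        # 100 test points x 50 labels
relevance = (rng.random((100, 50)) > 0.9).astype(float)

print("pointwise P@5:", precision_at_k(scores, relevance, k=5, axis=1))
print("labelwise P@5:", precision_at_k(scores, relevance, k=5, axis=0))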
“…Slice only works on low-dimensional embeddings and does not scale to high-dimensional bag-of-words features used in this paper. Some extreme classifiers like PfastreXML [26] and LEML [67] can be easily adapted to learn from any relevance weights, but they tend to be inaccurate and inefficient since they train a large ensemble of weak trees and inaccurate low-dimensional projections with linear reconstruction time, respectively.…”
Section: Related Work (mentioning)
Confidence: 99%
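
As a rough illustration of the "low-dimensional projections with linear reconstruction time" point above, the sketch below compresses the label matrix to a k-dimensional embedding, regresses features onto that embedding, and decodes scores for all L labels at test time, costing O(L * k) per point. This is an assumed setup, not LEML's implementation: it uses a plain SVD factorization and ridge regression instead of LEML's regularized low-rank objective, and all names and sizes are hypothetical.

# Rough sketch (assumed setup, not LEML's implementation) of a low-rank label
# embedding with linear reconstruction time: decoding scores for all L labels
# costs O(L * k) per test point.
import numpy as np

rng = np.random.default_rng(0)
N, D, L, k = 1000, 200, 5000, 50                      # points, features, labels, embedding dim

X = rng.standard_normal((N, D))
Y = (rng.random((N, L)) > 0.999).astype(float)        # sparse binary label matrix

# Low-rank factorization of the label matrix (plain SVD here; LEML instead
# optimizes a regularized low-rank objective).
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Z = U[:, :k] * s[:k]                                  # N x k label embeddings
V = Vt[:k]                                            # k x L decoding matrix

# Ridge regression from features to the label embedding.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Z)   # D x k

x_test = rng.standard_normal(D)
scores = (x_test @ W) @ V                             # reconstruct all L scores: O(L * k)
print("top-5 predicted labels:", np.argsort(-scores)[:5])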