2021
DOI: 10.48550/arxiv.2111.11294
Preprint

Scaling Law for Recommendation Models: Towards General-purpose User Representations

Cited by 2 publications (2 citation statements)
References 20 publications
“…However, these models are all based on the shared-ID assumption in the source and target domains, which tends to be difficult to hold in practice. Distinct from these works, some preprint papers [34,35,41] devised general-purpose recommender systems (gpRS) by leveraging textual information. [39] learned user representations based on text and image modalities, with images processed into frozen features beforehand.…”
Section: Related Work (mentioning, confidence: 99%)
“…Training deep networks is a canonical task in modern machine learning. Efficiently training and serving deep networks with very large parameter counts has become paramount in recent years, since significant performance gains on a variety of tasks are possible simply by scaling up parameter counts [Rae et al., 2021, Shin et al., 2021, Gordon et al., 2021, Sharma and Kaplan, 2020]. Further theoretical evidence suggests that requiring the resulting learned networks to be smooth functions (and therefore, in some sense, robust to perturbations) entails even larger parameter counts [Bubeck and Sellke, 2021].…”
Section: Introduction (mentioning, confidence: 99%)
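
For context on the "scaling up parameter counts" claim in the snippet above: the scaling-law literature it cites typically models test loss as a power law in model size. A minimal illustrative form, with the symbols below chosen for illustration rather than taken from the indexed paper, is

    L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N},

where L is the test loss, N the number of (non-embedding) parameters, and N_c, \alpha_N fitted constants; analogous power laws in dataset size and compute appear in the same line of work.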