Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3463069
Empowering News Recommendation with Pre-trained Language Models

Cited by 98 publications (43 citation statements)
References 12 publications
“…Personalized news recommendation is a critical way to personalize news distribution and alleviate the information overload problem. Multiple news recommendation methods have been proposed recently [27,29,30,40,42,43,46]. Generally, there are three core components in news recommendation methods: news model, user model, and click prediction module.…”
Section: Related Work 2.1 Personalized News Recommendation (mentioning)
confidence: 99%
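The statement above names three core components of a news recommendation method: a news model, a user model, and a click prediction module. A minimal NumPy sketch of how these components fit together, with purely illustrative embeddings and pooling choices (mean pooling and an inner-product scorer stand in for the richer models used in the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def news_model(title_ids, emb_table):
    # News model: encode a news title into one vector.
    # Illustrative choice: mean-pool the word embeddings of the title.
    return emb_table[title_ids].mean(axis=0)

def user_model(clicked_news_vecs):
    # User model: aggregate the user's clicked-news vectors into a user vector.
    # Illustrative choice: a simple average.
    return clicked_news_vecs.mean(axis=0)

def click_score(user_vec, candidate_vec):
    # Click prediction module: score a (user, candidate news) pair.
    # Illustrative choice: inner product.
    return float(user_vec @ candidate_vec)

emb = rng.normal(size=(100, 8))          # toy word-embedding table
clicked = np.stack([
    news_model([1, 2, 3], emb),          # previously clicked news
    news_model([4, 5], emb),
])
u = user_model(clicked)
s = click_score(u, news_model([6, 7, 8], emb))
print(clicked.shape, u.shape)            # (2, 8) (8,)
```

Real systems replace each illustrative choice with a learned network; the pipeline shape (news encoder, user encoder, scorer) is what the cited works share.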
“…Wu et al [45] use a combination of multi-head self-attention and additive attention to learn news representations. Wu et al [46] apply a pre-trained language model in the news model to strengthen its semantic understanding ability. The user model is used to learn user representations from the representations of users' historically clicked news.…”
Section: Related Work 2.1 Personalized News Recommendation (mentioning)
confidence: 99%
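The news encoder described in this statement stacks multi-head self-attention over the title's word embeddings, then pools the token vectors into one news vector with additive attention. A minimal NumPy sketch of that pattern, with random weights and illustrative dimensions (no training, no learned projections per head):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, num_heads=4):
    # X: (seq_len, d_model). Each head attends within its own slice of d_model.
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    heads = []
    for h in range(num_heads):
        Xh = X[:, h * d_head:(h + 1) * d_head]          # (seq, d_head)
        scores = softmax(Xh @ Xh.T / np.sqrt(d_head))   # (seq, seq) attention
        heads.append(scores @ Xh)                        # (seq, d_head)
    return np.concatenate(heads, axis=-1)                # (seq, d_model)

def additive_attention_pool(H, v):
    # Additive attention: score each token with a query vector v,
    # then return the attention-weighted sum of token vectors.
    alpha = softmax(np.tanh(H) @ v)                      # (seq,) weights
    return alpha @ H                                     # (d_model,) news vector

title_emb = rng.normal(size=(12, 16))    # 12 title tokens, 16-dim embeddings
H = multi_head_self_attention(title_emb, num_heads=4)
news_vec = additive_attention_pool(H, rng.normal(size=16))
print(H.shape, news_vec.shape)           # (12, 16) (16,)
```

In the cited models each head also has learned query/key/value projections and the additive-attention query is trained end to end; this sketch keeps only the data flow.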
“…Wu et al [156] proposed to learn news representations from news titles via a combination of multi-head self-attention and additive attention networks. Wu et al [161] studied using pre-trained language models to encode news texts. These deep learning-based news modeling methods can automatically learn informative news representations without manual feature engineering, and they can usually understand news content better than traditional feature-based methods.…”
Section: News Modeling (mentioning)
confidence: 99%
“…These methods usually learn news representations based on shallow text models and non-contextualized word embeddings such as GloVe [119], which may be insufficient to capture the deep semantic information in news. Pre-trained language models (PLMs) such as BERT [27] have been greatly successful in the NLP field, and a few recent works explore empowering news modeling with PLMs [161,167]. For example, PLM-NR [161] uses different PLMs to empower English and multilingual news recommendation, and the online flight results in Microsoft News showed a notable performance improvement.…”
Section: Deep Learning-based News Modeling (mentioning)
confidence: 99%
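The PLM-NR pattern described above swaps the shallow word embeddings for contextual token vectors from a pre-trained language model, then keeps the usual attention pooling, user aggregation, and dot-product click scoring. A self-contained NumPy sketch of that data flow, where `plm_encode` is a stub standing in for a real PLM such as BERT (a fixed random embedding table, purely for illustration) and the attention query `query` is a hypothetical learned parameter:

```python
import numpy as np

rng = np.random.default_rng(1)

def plm_encode(token_ids, d_model=16):
    # Stand-in for a real PLM (e.g. BERT): returns one contextual vector per
    # token. Here it is just a fixed random lookup table, for illustration.
    table = np.random.default_rng(42).normal(size=(1000, d_model))
    return table[np.asarray(token_ids)]                  # (num_tokens, d_model)

def attention_pool(H, v):
    # Attention pooling: weight rows of H by a query vector v, then sum.
    w = np.exp(H @ v)
    w /= w.sum()
    return w @ H                                          # (d_model,)

query = rng.normal(size=16)                               # pooling query (stub)

# News model: PLM token vectors -> pooled news vector.
cand_vec = attention_pool(plm_encode([5, 17, 256]), query)

# User model: pool the vectors of previously clicked news.
hist_vecs = np.stack([attention_pool(plm_encode(t), query)
                      for t in ([3, 9], [88, 4, 21])])
user_vec = attention_pool(hist_vecs, query)

# Click prediction: inner product between user and candidate vectors.
click_score = cand_vec @ user_vec
print(cand_vec.shape, user_vec.shape)                     # (16,) (16,)
```

In PLM-NR the PLM is fine-tuned with the recommender, and the pooling/query parameters are learned; this sketch only shows where the PLM plugs into the standard news-user-scorer pipeline.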