Fourteenth ACM Conference on Recommender Systems 2020
DOI: 10.1145/3383313.3412249

What does BERT know about books, movies and music? Probing BERT for Conversational Recommendation

Abstract: Heavily pre-trained transformer models such as BERT have recently been shown to be remarkably powerful at language modelling, achieving impressive results on numerous downstream tasks. It has also been shown that they are able to implicitly store factual knowledge in their parameters after pre-training. Understanding what the pre-training procedure of LMs actually learns is a crucial step for using and improving them for Conversational Recommender Systems (CRS). We first study how much off-the-shelf pre-trained B…
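The probing idea described in the abstract can be illustrated with a cloze-style query against an off-the-shelf masked LM. The sketch below assumes the HuggingFace transformers library and the bert-base-uncased checkpoint; the prompt templates are illustrative stand-ins, not the probes actually used in the paper.

```python
# Minimal sketch of a masked-LM probe for item knowledge.
# Assumes: `pip install transformers torch`; prompts are hypothetical examples.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Cloze-style statements about books, movies and music.
prompts = [
    "The Lord of the Rings is a [MASK] book.",
    "Pulp Fiction is a movie directed by Quentin [MASK].",
    "Thriller is an album by Michael [MASK].",
]

for prompt in prompts:
    # The model's top completions for the masked token hint at what
    # factual knowledge about the item is stored in its parameters.
    for pred in fill(prompt, top_k=3):
        print(f"{prompt} -> {pred['token_str']} ({pred['score']:.3f})")
```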

Cited by 57 publications (33 citation statements)
References 56 publications
“…Similarly, Petroni et al. (2019) present an in-depth analysis of relational knowledge present in pre-trained LMs. Penha and Hauff (2020) probe the contextual LMs (BERT and RoBERTa) for the conversational recommendation of books, movies, and music. Our work seeks to apply the idea of probing to a relatively unexplored area of affect analysis.…”
Section: Related Work
Mentioning confidence: 99%
“…There has also been research on probing LMs on application-specific representations such as question answering (van Aken et al., 2019), information retrieval (Yilmaz et al., 2019), recommendation systems (Penha and Hauff, 2020), and dialog systems (Wu and Xiong, 2020). The entire spectrum of these works aims to understand the learning capability and properties encoded in the LM, along with discovering their shortcomings.…”
Section: Related Work
Mentioning confidence: 99%
“…Conversational recommendation focuses on combining a recommendation system with online conversation to capture user preferences (Fu et al., 2020; Sun and Zhang, 2018). Previous works mostly focus on learning the agent-side policy to ask the right questions and make accurate recommendations (Li et al., 2020; Penha and Hauff, 2020). Chit-chat (Adiwardana et al., 2020; Roller et al., 2020) is the most free-form dialogue setting, but comes with almost no knowledge grounding or state tracking.…”
Section: Related Work
Mentioning confidence: 99%