Proceedings of the 31st ACM International Conference on Information & Knowledge Management 2022
DOI: 10.1145/3511808.3557417
Personalizing Task-oriented Dialog Systems via Zero-shot Generalizable Reward Function

Abstract: Task-oriented dialog systems enable users to accomplish tasks using natural language. State-of-the-art systems respond to users in the same way regardless of their personalities, although personalizing dialogues can lead to higher levels of adoption and better user experiences. Building personalized dialog systems is an important yet challenging endeavor, and only a handful of works have taken on the challenge. Most existing works rely on supervised learning approaches and require laborious and expensive labeled tra…

Cited by 8 publications (2 citation statements)
References 58 publications
“…Unsupervised Representation Learning and Pre-trained Language Models. Unsupervised (or self-supervised) latent representation learning [48] and pre-trained language models have contributed greatly to recent NLP success [10], [35], [36], [50], including facilitating zero-shot learning [47], [49]. The unsupervised learning technique enabled the development of more robust NLP systems due to the abundance of textual data.…”
Section: Preliminaries
confidence: 99%
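The zero-shot setup this excerpt alludes to can be illustrated with a minimal sketch: an utterance is assigned the class whose natural-language description it most resembles, so no labeled examples of the class are needed. Here a bag-of-words vector stands in for a pretrained-LM embedding, and the intent labels and descriptions are illustrative, not drawn from the paper.

```python
from collections import Counter
import math

def embed(text):
    """Toy stand-in for a pretrained-LM sentence embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(utterance, label_descriptions):
    """Pick the label whose description is most similar to the utterance --
    classes unseen during training need only a textual description."""
    u = embed(utterance)
    return max(label_descriptions,
               key=lambda lab: cosine(u, embed(label_descriptions[lab])))

# Hypothetical intent labels with short natural-language descriptions.
labels = {
    "book_flight": "book a flight plane ticket travel",
    "order_food": "order food delivery restaurant meal",
}
print(zero_shot_classify("I want to order a meal from a restaurant", labels))
```

With a real pretrained encoder in place of `embed`, the same matching step is what lets such systems generalize to new domains without per-domain training data.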
“…Recently, researchers have proposed a wide range of approaches to address the label scarcity issue for the slot filling task, such as zero-shot learning [27,43] and weak supervision [8,20]. Zero-shot learning methods [44,45] can classify instances of new classes at inference time that they had not encountered during training, and thus do not require training data for each new domain. Weak supervision approaches eliminate the need for manually labeled data by automatically generating noisy labels with a heuristic labeling function, powered by almost freely-available external knowledge bases, off-the-shelf models (e.g., NER models), or a combination of the two.…”
confidence: 99%
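The heuristic labeling function described in this excerpt can be sketched as follows, assuming a small gazetteer that stands in for the "almost freely-available external knowledge base"; the entity list and slot name are hypothetical. Every token receives a noisy BIO slot label with no manual annotation.

```python
# Hypothetical gazetteer standing in for an external knowledge base.
CITY_GAZETTEER = {"paris", "new york", "tokyo"}

def weak_label(tokens, gazetteer=CITY_GAZETTEER, tag="city"):
    """Heuristic labeling function: emit noisy BIO slot labels by matching
    token spans against a gazetteer, preferring the longest match."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest span starting at i first (here up to 3 tokens).
        for j in range(min(len(tokens), i + 3), i, -1):
            span = " ".join(t.lower() for t in tokens[i:j])
            if span in gazetteer:
                labels[i] = f"B-{tag}"
                for k in range(i + 1, j):
                    labels[k] = f"I-{tag}"
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return labels

print(weak_label("Book a flight to New York tomorrow".split()))
```

The labels are noisy by design (the gazetteer misses entities and matches ambiguous strings), which is exactly the trade-off the cited weak-supervision approaches accept in exchange for skipping manual labeling.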