2022
DOI: 10.21203/rs.3.rs-920254/v1
Preprint
Bilateral Personalized Dialogue Generation with Contrastive Learning

Abstract: Generating personalized responses is one of the major challenges in natural human-robot interaction. Current research in this field mainly focuses on generating responses consistent with the robot’s pre-assigned persona, while ignoring the user’s persona. Such responses may be inappropriate or even offensive, which may lead to a bad user experience. Therefore, we propose a Bilateral Personalized Dialogue Generation (BPDG) method for dyadic conversation, which integrates user and robot personas into dialogue …

Cited by 2 publications (1 citation statement)
References: 41 publications
“…On the task of neural text generation, CL (Jiang et al. 2022), a.k.a. unlikelihood training (Welleck et al. 2020) or negative training (He and Glass 2020), alleviates undesirable properties of the generated output, e.g., repetition (Shirai et al. 2020; Jiang et al. 2022), maliciousness (He and Glass 2020), dullness (Li et al. 2020b, 2022), or inconsistency (Li et al. 2020a). Moreover, Cao and Wang (2021) propose a sentence-level contrastive learning method to reduce the hallucinations of text summarization models.…”
Section: Contrastive Learning
Confidence: 99%
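The unlikelihood training mentioned in the citation statement penalizes the probability mass a model assigns to unwanted tokens (e.g., tokens whose repetition would degrade the output). A minimal sketch of that penalty term, written here from the general form in Welleck et al. (2020) rather than from this paper's own implementation (the function name and toy distribution are illustrative):

```python
import math

def unlikelihood_loss(probs, negative_ids):
    """Unlikelihood penalty: sum of -log(1 - p(c)) over negative tokens c.

    probs:        model's next-token probability distribution (list of floats)
    negative_ids: indices of tokens to discourage (e.g., already-generated
                  tokens that would cause repetition)
    """
    eps = 1e-12  # guard against log(0) when a negative token has p close to 1
    return -sum(math.log(max(1.0 - probs[c], eps)) for c in negative_ids)

# Toy 4-token vocabulary: the model puts most mass on token 0,
# so discouraging token 0 yields a large penalty, -log(1 - 0.7).
probs = [0.7, 0.1, 0.1, 0.1]
print(round(unlikelihood_loss(probs, [0]), 4))  # → 1.204
```

The penalty grows without bound as the model concentrates probability on a negative token, and vanishes when no mass is placed on any negative token, which is why it complements (rather than replaces) the usual likelihood objective.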