Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.52
CDL: Curriculum Dual Learning for Emotion-Controllable Response Generation

Abstract: Emotion-controllable response generation is an attractive and valuable task that aims to make open-domain conversations more empathetic and engaging. Existing methods mainly enhance emotion expression by adding regularization terms to the standard cross-entropy loss, thereby influencing the training process. However, because content consistency receives no further consideration, the safe-response problem common to response generation tasks is intensified. Besides, query emotions that can help model the relat…
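The "regularization terms added to standard cross-entropy loss" that the abstract attributes to existing methods can be sketched as follows. This is a minimal, illustrative example in pure Python, not the paper's actual objective; the function name, the averaging scheme, and the weight `alpha` are assumptions for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target):
    """Negative log-probability of the target class."""
    return -math.log(softmax(logits)[target])

def emotion_regularized_loss(token_logits, token_targets,
                             emotion_logits, emotion_label, alpha=0.1):
    """Token-level cross-entropy for the response, plus an emotion-
    classification regularizer weighted by alpha (illustrative only)."""
    ce = sum(cross_entropy(l, t)
             for l, t in zip(token_logits, token_targets)) / len(token_targets)
    emo = cross_entropy(emotion_logits, emotion_label)
    return ce + alpha * emo
```

Increasing `alpha` trades content likelihood against emotion accuracy, which is exactly the tension the abstract points to: pushing emotion expression through the loss alone can degrade content consistency.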


Cited by 66 publications (41 citation statements)
References 40 publications
“…For I, an action is the identification from sentence to relation, a state is denoted by the representation of the input event pair and its sentence, and a policy is defined by the parameters of the identifier. Inspired by Shen and Feng (2020), we utilize a probability distribution over actions given states to represent the policies, i.e., the probability distribution of the generation of G and identification of I. As aforementioned, we introduce two rewards, causality (R_c) and semantic alignment (R_s), which encourage G to generate task-related sentences with feedback from the identifier, while further optimizing I with feedback from the generator.…”
Section: Architecture of Learnable Dual Augmentation
confidence: 99%
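The statement above describes a policy-gradient setup in which a combined reward drives the generator G. A minimal sketch of how such a causality reward R_c and semantic-alignment reward R_s might be combined into a REINFORCE-style surrogate loss; the weighting scheme, baseline, and function names are hypothetical, not the cited model's actual code.

```python
def reinforce_loss(log_probs, reward_c, reward_s,
                   w_c=0.5, w_s=0.5, baseline=0.0):
    """REINFORCE-style surrogate loss for a generator G.

    log_probs: per-token log-probabilities of the sampled sentence.
    reward_c, reward_s: causality (R_c) and semantic-alignment (R_s)
    rewards, combined with hypothetical weights w_c and w_s.
    Minimizing this loss raises the probability of high-reward samples.
    """
    reward = w_c * reward_c + w_s * reward_s
    advantage = reward - baseline
    return -advantage * sum(log_probs)
```

The same combined reward can then be fed back to optimize the identifier I, which is the dual direction the excerpt mentions.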
“…Recently, dialogue systems have gained more attention in both the research community (Vougiouklis et al, 2016; Liu et al, 2018; Shen et al, 2019; Shen and Feng, 2020) and industry (Xu et al, 2020), because of their practicality in real applications such as chatbots and customer service (Shen et al, 2021; Zhang et al, 2020). With external knowledge sources, dialogue systems can generate more specific and informative responses, which has great potential to resolve the common safe-response problem (Zhang et al, 2018b; Ren et al, 2020). Some works decompose the knowledge-grounded dialogue generation task into two sub-problems: knowledge selection and response selection.…”
Section: Related Work
confidence: 99%
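The knowledge-selection sub-problem mentioned above can be illustrated with a simplistic stand-in: score each candidate knowledge sentence against the query by embedding similarity and keep the best one. Cosine similarity over fixed vectors is an assumption here; real systems learn this selection.

```python
import math

def select_knowledge(query_vec, knowledge_vecs):
    """Return the index of the knowledge embedding most similar to the
    query embedding (cosine similarity; illustrative, not learned)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return max(range(len(knowledge_vecs)),
               key=lambda i: cos(query_vec, knowledge_vecs[i]))
```

The selected sentence would then condition the second sub-problem, response selection or generation.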
“…In our model, we use an MLP to predict question types during inference, which is different from the conditional training (CT) methods (Li et al, 2016b; Shen and Feng, 2020) that provide the controllable feature, i.e., question types, in advance for inference. Therefore, we do not consider CT-based models as comparable ones.…”
Section: Baselines
confidence: 99%
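The contrast drawn above is between supplying the question type as an input (CT) and predicting it at inference time. A one-hidden-layer MLP classifier of the kind the excerpt mentions can be sketched in a few lines; the shapes and weights here are illustrative, and the cited model's architecture may differ.

```python
def mlp_predict_type(features, W1, b1, W2, b2):
    """One-hidden-layer MLP with ReLU: returns the argmax question type.

    features: input feature vector; W1/b1 and W2/b2 are the hidden and
    output layer parameters (illustrative shapes only).
    """
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    logits = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(W2, b2)]
    return max(range(len(logits)), key=lambda i: logits[i])
```

At inference time the predicted type replaces the gold control feature that CT methods would receive, which is why the authors treat the two settings as non-comparable.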