Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation 2019
DOI: 10.18653/v1/w19-2301

An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model

Abstract: In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq) neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to simultaneously capture utterance attributes such as speaker identity, dialogue topic, speaker sentiments and so on. The proposed system, phredGAN, has a persona-based HRED generator (PHRED) and a conditional discriminator. We also explore two approaches to accomplish the conditional discriminator: (1) phredGAN_a, a…
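The abstract names two components: a persona-conditioned hierarchical recurrent encoder-decoder (PHRED) generator and a conditional discriminator. Below is a minimal, illustrative PyTorch sketch of how attribute conditioning of both networks could be wired up. It is not the authors' released implementation; the module structure, layer sizes, and the single attribute id (attr_id, e.g. a speaker identity) are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): a persona-conditioned HRED-style
# generator and a conditional discriminator that both receive an attribute embedding.
import torch
import torch.nn as nn

class PHREDGenerator(nn.Module):
    def __init__(self, vocab_size, n_attrs, emb=128, hid=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb)
        self.attr_emb = nn.Embedding(n_attrs, emb)          # speaker / topic / sentiment id
        self.utt_enc = nn.GRU(emb, hid, batch_first=True)   # utterance-level encoder
        self.ctx_enc = nn.GRU(hid, hid, batch_first=True)   # dialogue-level (context) encoder
        self.dec = nn.GRU(emb + emb + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, turns, attr_id, response_in):
        # turns: (B, T, L) token ids of T context utterances; response_in: (B, L_r)
        B, T, L = turns.shape
        _, h = self.utt_enc(self.tok_emb(turns.view(B * T, L)))
        utt_states = h[-1].view(B, T, -1)                   # one vector per context utterance
        _, ctx = self.ctx_enc(utt_states)                   # dialogue state: (1, B, hid)
        ctx = ctx[-1].unsqueeze(1).expand(-1, response_in.size(1), -1)
        attr = self.attr_emb(attr_id).unsqueeze(1).expand(-1, response_in.size(1), -1)
        dec_in = torch.cat([self.tok_emb(response_in), attr, ctx], dim=-1)
        out, _ = self.dec(dec_in)
        return self.out(out)                                # (B, L_r, vocab) logits

class CondDiscriminator(nn.Module):
    """phredGAN_a-style conditioning: the attribute is an extra discriminator input."""
    def __init__(self, vocab_size, n_attrs, emb=128, hid=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb)
        self.attr_emb = nn.Embedding(n_attrs, emb)
        self.rnn = nn.GRU(emb + emb, hid, batch_first=True)
        self.score = nn.Linear(hid, 1)

    def forward(self, response, attr_id):
        attr = self.attr_emb(attr_id).unsqueeze(1).expand(-1, response.size(1), -1)
        _, h = self.rnn(torch.cat([self.tok_emb(response), attr], dim=-1))
        return torch.sigmoid(self.score(h[-1]))             # P(real | response, attribute)
```

In an adversarial setup of this kind, generated and ground-truth responses, each paired with the same attribute id, would be scored by the discriminator while the generator is trained to fool it.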

Cited by 16 publications (15 citation statements) | References 11 publications
“…More recently, under the topic of dialogue personalizing, Zemlyanskiy and Sha (2018) proposed a post-processing method to re-rank candidates generated by beam search, while Olabiyi et al (2019) employed adversarial approaches to solve the consistency problem on interlocutors' names. Madotto et al (2019) applied meta-learning to quickly adapt to new speakers, and Tigunova et al (2019) extracted user attributes from daily dialogues.…”
Section: Related Work
confidence: 99%
“…There are various kinds of background knowledge that can be grounded in dialogue modeling, including commonsense items from knowledge graphs to strengthen reasoning ability [Zhang et al, 2021a], persona-based attributes such as speaker identity, dialogue topic, and speaker sentiments to enrich the dialogue context [Olabiyi et al, 2019], scenario information to provide the dialogue background [Ouyang et al, 2020], etc.…”
Section: Background Knowledge Grounding
confidence: 99%
“…This attribute should be very beneficial to dialogue modeling, especially in multi-turn scenarios. Most of the existing neural dialogue response generation models are based on recurrent neural networks (Sutskever et al, 2014; Vinyals and Le, 2015; Li et al, 2016a; Serban et al, 2016; Xing et al, 2017; Serban et al, 2017a,b; Li et al, 2016b; Zhang et al, 2018a; Olabiyi et al, 2018, 2019a). These models have yielded promising results by generating mostly coherent responses given the dialogue context.…”
Section: Introduction
confidence: 99%