Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1205
Personalizing Dialogue Agents: I have a dog, do you have pets too?

Abstract: Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our…
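
As a minimal illustration of the next-utterance prediction setup described in the abstract, the toy sketch below ranks candidate responses by lexical overlap with the persona and the dialogue history. The scoring function and all names here are illustrative stand-ins, not the paper's trained models (which are neural ranking and generative architectures).

```python
# Minimal sketch of persona-conditioned next-utterance ranking.
# The scorer is a deliberate toy (bag-of-words overlap); the paper
# itself trains neural models for this ranking task.

def bag_of_words(text):
    """Lowercased token set as a crude utterance representation."""
    return set(text.lower().split())

def score_candidate(persona, history, candidate):
    """Toy relevance score: lexical overlap with persona and history."""
    cand = bag_of_words(candidate)
    persona_overlap = len(cand & bag_of_words(" ".join(persona)))
    history_overlap = len(cand & bag_of_words(" ".join(history)))
    return 2 * persona_overlap + history_overlap  # weight persona higher

def predict_next_utterance(persona, history, candidates):
    """Rank candidate responses and return the best-scoring one."""
    return max(candidates, key=lambda c: score_candidate(persona, history, c))

persona = ["I have a dog.", "I like to hike on weekends."]
history = ["Hi! Do you have pets too?"]
candidates = [
    "Yes, I have a dog and we hike together on weekends.",
    "I do not know.",
    "The weather is nice today.",
]
print(predict_next_utterance(persona, history, candidates))
```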

Cited by 974 publications (1,224 citation statements). References 27 publications.
“…Another related work (Raghu et al., 2018) applies a hierarchical memory network to the task-oriented dialogue problem. In this work, we compare our model with (Zhang et al., 2018), which uses a memory-augmented sequence-to-sequence response generator grounded on the dialogue history and persona.…”
Section: Related Work (mentioning, confidence: 99%)
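
For readers unfamiliar with the memory-augmented generator this excerpt compares against, here is a hedged sketch of one decoder step attending over persona sentence embeddings. Shapes and names are assumptions for illustration, not the cited model's actual architecture.

```python
# Illustrative sketch of grounding a decoder step on a persona memory,
# in the spirit of a memory-augmented seq2seq generator. All shapes
# and names are assumptions, not the cited model's architecture.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_persona(decoder_state, persona_memory):
    """Dot-product attention over persona sentence embeddings.

    decoder_state: (d,) current decoder hidden state
    persona_memory: (n, d) one embedding per persona sentence
    Returns a persona context vector to mix into the next-token logits.
    """
    scores = persona_memory @ decoder_state   # (n,) similarity scores
    weights = softmax(scores)                 # attention over persona lines
    return weights @ persona_memory           # (d,) context vector

rng = np.random.default_rng(0)
state = rng.normal(size=8)
memory = rng.normal(size=(4, 8))   # 4 persona sentences, dim 8
context = attend_persona(state, memory)
print(context.shape)               # (8,)
```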
“…Our proposed approach extends pointer-generator networks (See et al., 2017) by allowing the decoder to hierarchically attend to and copy from external knowledge in addition to the dialogue context. We empirically show the effectiveness of the proposed model compared to several baselines, including (Ghazvininejad et al., 2018; Zhang et al., 2018), through both automatic evaluation metrics and human evaluation on the CONVAI2 dataset.…”
(mentioning, confidence: 99%)
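
The pointer-generator mixture this excerpt extends (See et al., 2017) can be sketched as follows: the output distribution interpolates a vocabulary softmax with a copy distribution scattered from source-token attention. The function name and toy shapes below are illustrative assumptions.

```python
# Hedged sketch of the pointer-generator mixture (See et al., 2017):
# the final distribution mixes a generation distribution over the
# vocabulary with a copy distribution induced by source attention.
import numpy as np

def pointer_generator_step(vocab_dist, attention, source_ids, vocab_size, p_gen):
    """Mix generation and copy probabilities into one distribution.

    vocab_dist: (V,) softmax over the vocabulary
    attention:  (T,) attention weights over source tokens
    source_ids: (T,) vocabulary id of each source token
    p_gen:      scalar in [0, 1], probability of generating vs. copying
    """
    final = p_gen * vocab_dist
    copy = np.zeros(vocab_size)
    np.add.at(copy, source_ids, attention)    # scatter-add copy mass
    return final + (1.0 - p_gen) * copy

vocab_dist = np.full(10, 0.1)                 # uniform toy vocab distribution
attention = np.array([0.7, 0.2, 0.1])         # attention over 3 source tokens
source_ids = np.array([4, 4, 7])              # token 4 appears twice
out = pointer_generator_step(vocab_dist, attention, source_ids, 10, p_gen=0.6)
print(out.sum())                              # ~1.0, still a distribution
```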
“…We attribute the diminishing returns from increasing the number of hops to the limited size of the TrecQA and WikiQA datasets. Many previous works related to memory networks also use a small number of memory hops (Sukhbaatar et al., 2015; Miller et al., 2016; Zhang et al., 2018). Table 2 summarizes the performance of our proposed models and compares them to the baselines on the TrecQA and WikiQA datasets.…”
Section: Philosophy (mentioning, confidence: 99%)
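
A compact sketch of the multi-hop memory reading this excerpt discusses, in the style of end-to-end memory networks (Sukhbaatar et al., 2015): each hop re-attends over memory and residually refines the query. Dimensions are toy assumptions.

```python
# Sketch of multi-hop memory reading (Sukhbaatar et al., 2015 style):
# each hop attends over memory slots and updates the query residually.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hops(query, memory, num_hops=2):
    """Refine the query over `num_hops` rounds of attention reads.

    query:  (d,) initial question/state embedding
    memory: (n, d) memory slot embeddings
    """
    u = query
    for _ in range(num_hops):
        p = softmax(memory @ u)   # attention over memory slots
        o = p @ memory            # weighted read from memory
        u = u + o                 # residual update feeds the next hop
    return u

rng = np.random.default_rng(1)
u = memory_hops(rng.normal(size=16), rng.normal(size=(5, 16)), num_hops=3)
print(u.shape)  # (16,)
```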
“…Deep learning techniques are used in both retrieval-based and generative models, but research seems to be moving in the generative direction [8]. A popular class of generative chatbot models are generative recurrent systems like seq2seq [9], which are rooted in language modelling and are able to produce syntactically coherent novel responses, in conjunction with memory-augmented networks [10].…”
Section: Introduction (mentioning, confidence: 99%)
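
As a rough illustration of the token-by-token seq2seq generation loop mentioned in this excerpt, the sketch below decodes greedily; `next_token_dist` is a random stand-in for a trained decoder, included only to make the loop self-contained and runnable.

```python
# Toy sketch of a seq2seq generation loop: responses are produced
# token by token, each step conditioned on the encoded input and the
# tokens emitted so far. `next_token_dist` is a hypothetical stand-in
# for a trained decoder, not a real model.
import numpy as np

def next_token_dist(encoded, prefix, vocab_size=6):
    """Stand-in decoder step: returns a distribution over tokens."""
    rng = np.random.default_rng(len(prefix) + int(encoded.sum()) % 97)
    logits = rng.normal(size=vocab_size)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def greedy_decode(encoded, eos_id=0, max_len=10):
    """Emit the argmax token at each step until EOS or max length."""
    prefix = []
    for _ in range(max_len):
        token = int(np.argmax(next_token_dist(encoded, prefix)))
        if token == eos_id:
            break
        prefix.append(token)
    return prefix

print(greedy_decode(np.arange(4.0)))
```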
“…Despite the continuous progress in deep learning, which makes chatbots quite reliable and able to provide automatic and adaptive human-like conversational behaviour, they still need improvement with respect to supporting complex behaviours [4,10]. The infinite number of topics, and the fact that a certain amount of world knowledge is required to create reasonable responses, makes the development of an open-domain chatbot a hard problem [8].…”
Section: Introduction (mentioning, confidence: 99%)