Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1170
What makes a good conversation? How controllable attributes affect human judgments

Abstract: A good conversation requires balance: between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, spec…

Cited by 206 publications (304 citation statements)
References 34 publications
“…This module characterizes the specificity of the response and can guide the model to generate responses with different specificities according to different specificity requirements. See et al. [19] proposed two controllable neural text generation methods, conditional training and weighted decoding, which control four important low-level attributes (repetition, specificity, relatedness, and question-asking) that affect the quality of a conversation. These attributes determine whether the response is simple or specific, whether the topic continues or changes, and whether the sentence is a question or an answer.…”
Section: Specificity Level (mentioning)
confidence: 99%
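The weighted-decoding method the excerpt refers to operates at decoding time: each candidate token's log-probability is adjusted by a weighted sum of control features, and the sign and size of each weight steer the corresponding attribute up or down. The following is a minimal sketch, not the paper's implementation: the toy vocabulary, the word-length "specificity" feature, and the weight values are all illustrative assumptions.

```python
import math

def weighted_decode_step(base_logprobs, features, weights):
    """One step of weighted decoding: rescore each candidate token by
    adding weighted control features to its base log-probability,
    then pick the highest-scoring token (greedy decoding)."""
    scores = {}
    for token, logp in base_logprobs.items():
        scores[token] = logp + sum(
            weights[name] * feat(token) for name, feat in features.items()
        )
    return max(scores, key=scores.get)

# Toy next-token distribution from a hypothetical language model.
base = {"yes": math.log(0.5), "fascinating": math.log(0.2), "ok": math.log(0.3)}

# A crude stand-in "specificity" feature: longer words score higher.
features = {"specificity": lambda tok: len(tok) / 10.0}

print(weighted_decode_step(base, features, {"specificity": 0.0}))  # "yes"
print(weighted_decode_step(base, features, {"specificity": 5.0}))  # "fascinating"
```

With weight 0 the model's most probable token wins; a large positive weight overrides the base distribution in favour of the more "specific" word, which is the rescoring idea behind decoding-time attribute control.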
“…Recently, controlling specific aspects in text generation has been drawing increasing attention (Hu et al., 2017; Logeswaran et al., 2018). In the context of dialogue generation, Wang et al. (2017) propose steering response style and topic with human-provided topic hints and fine-tuning on small scenting data; Zhang et al. (2018a) propose learning to control the specificity of responses; and very recently, See et al. (2019) investigate how controllable attributes of responses affect human engagement, using conditional training and weighted decoding. Our work is different in that (1) rather than playing with a single variable like specificity or topics, our model simultaneously controls multiple variables, and can treat controlling specificity or topics as special cases; and (2) we manage attribute expression in response generation with a principled approach rather than simple heuristics as in See et al. (2019), and thus our model can achieve better accuracy in terms of attribute expression in generated responses.…”
Section: Related Work (mentioning)
confidence: 99%
“…For local attributes such as response length, the dynamic control strategy is more reasonable than static strategies such as feeding the embedding of attributes to the decoder, as in conditional training (See et al., 2019). This is because if the goal is to generate a response with 5 words and 2 words have been decoded, then the decoder needs to know that there are 3 words left rather than always memorizing that 5 words should be generated. (Footnote: local attributes refer to attributes whose values are location-sensitive during response generation.)…”
Section: Goal Tracking Memory Network (mentioning)
confidence: 99%
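The excerpt's point about dynamic versus static control can be made concrete: a static scheme conditions the decoder once on "generate 5 words", while a dynamic scheme feeds the remaining count into every decoding step. A toy sketch follows; the candidate-word list and the round-robin selection rule are placeholders for a real decoder, and only the remaining-count bookkeeping is the point.

```python
def generate_with_length_goal(words, goal_len):
    """Sketch of dynamic length control: at each decoding step the
    control signal is how many words REMAIN, not the original goal.
    `words` is a stand-in candidate list; a real model would sample."""
    out = []
    remaining = goal_len
    while remaining > 0:
        # A real decoder would receive `remaining` as a conditioning
        # input here; in this toy we just record it with each word.
        word = words[len(out) % len(words)]
        out.append((word, remaining))
        remaining -= 1
    return out

steps = generate_with_length_goal(["well", "that", "is", "nice", "today"], 5)
# After decoding 2 words, the control signal tells the decoder
# that 3 words are left: steps[2] is ("is", 3).
```

This is exactly the situation in the quoted example: with a goal of 5 words and 2 already decoded, the dynamic signal is 3, whereas a static embedding would still say 5.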
“…It should be emphasized that the presented examples are only a small part of the possibilities offered by neural networks. Advanced research is underway on conducting intelligent conversations with people [16], advanced analysis of written texts [17], and evaluation of voice data [18]. The dynamic development of the field of neural networks suggests that in the near future complex cognitive models will be developed that enable intelligent behaviours so far attributed exclusively to humans [12].…”
Section: Neural Networks (mentioning)
confidence: 99%
Section: The Use Of Neural Network In Forensic Sexology (unclassified)