Interspeech 2018
DOI: 10.21437/interspeech.2018-2174
Neural MultiVoice Models for Expressing Novel Personalities in Dialog

Abstract: Natural language generators for task-oriented dialog should be able to vary the style of the output utterance while still effectively realizing the system dialog actions and their associated semantics. While the use of neural generation for training the response generation component of conversational agents promises to simplify the process of producing high quality responses in new domains, to our knowledge, there has been very little investigation of neural generators for task-oriented dialog that can vary th…

Cited by 10 publications (13 citation statements)
References 27 publications
“…We first discuss the baselines from previous work on the same benchmarks. Interestingly, for Personality, our NOCON model gets a huge performance improvement of more than 11 points in BLEU (27.74 → 38.45) over results reported by Oraby et al. (2018a). We note that while the underlying architecture behind our experiments is similar to the baseline described by Oraby et al. (2018a), we experiment with different parameters and attention mechanisms.…”
Section: Semantic Quality (mentioning)
confidence: 82%
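For context, the 27.74 → 38.45 comparison quoted above is in corpus-level BLEU on a 0–100 scale. A minimal sketch of how such a score is computed, assuming the sacrebleu package (the cited papers may use a different BLEU implementation, and the strings below are illustrative, not their data):

```python
# Illustrative only: hypothetical system outputs and references,
# not data from the cited papers.
import sacrebleu

hypotheses = [
    "this restaurant serves cheap italian food in the city centre",
    "the phone has a 6 inch screen and a great camera",
]
references = [
    "this restaurant serves inexpensive italian food in the city centre",
    "the phone has a 6 inch screen and an excellent camera",
]

# corpus_bleu takes the hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")  # score is on a 0-100 scale
```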
“…Interestingly, for Personality, our NOCON model gets a huge performance improvement of more than 11 points in BLEU (27.74 → 38.45) over results reported by Oraby et al. (2018a). We note that while the underlying architecture behind our experiments is similar to the baseline described by Oraby et al. (2018a), we experiment with different parameters and attention mechanisms. … and Oraby et al. (2018b) also use an LSTM encoder-decoder model with attention, but they both implement their models using the TGen (Dušek and Jurcicek, 2016) framework with its default model architecture.…”
Section: Semantic Quality (mentioning)
confidence: 82%
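The LSTM encoder-decoder with attention referenced in these statements is a standard sequence-to-sequence architecture. Below is a minimal, self-contained PyTorch sketch with dot-product (Luong-style) attention; all dimensions and names are illustrative assumptions, not the TGen or Oraby et al. configurations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqAttn(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid * 2, tgt_vocab)  # [dec state; context] -> vocab

    def forward(self, src, tgt):
        enc_out, state = self.encoder(self.src_emb(src))     # (B, S, H)
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)  # (B, T, H)
        # Dot-product attention: each decoder step attends over encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))    # (B, T, S)
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_out)                   # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1))  # (B, T, V)

# Toy usage: random token ids stand in for a delexicalized MR -> utterance pair.
model = Seq2SeqAttn(src_vocab=100, tgt_vocab=100)
src = torch.randint(0, 100, (2, 7))   # batch of 2 "meaning representations"
tgt = torch.randint(0, 100, (2, 9))   # batch of 2 target utterances
print(model(src, tgt).shape)          # torch.Size([2, 9, 100])
```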
“…There are many other possibilities with both the WikiData knowledge graph RGs and with corpora such as Viggo for tuning and testing regimes that we have not yet experimented with or fully evaluated. We also plan to carry out future experiments on a number of other challenging problems for NLG [36,28,29,16].…”
Section: Discussion (mentioning)
confidence: 99%
“…We find that while the vanilla model makes the fewest semantic errors, the outputs lose any distinctive stylistic variation. With Model Context, however, we are able to achieve our goal: we can both produce stylistically varied outputs that correlate with the required personalities, and preserve semantic fidelity with notably few errors [Oraby et al., 2018b; Oraby et al., 2018a].…”
Section: Extravert (mentioning)
confidence: 99%
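One common way to realize this kind of personality conditioning is to prepend a control token for the target style to the model's input sequence. The sketch below illustrates that general idea only; the exact "Model Context" mechanism of the quoted work may differ, and the personality labels and MR format here are assumptions:

```python
# Hedged sketch: conditioning generation on a personality by prepending a
# style token to the source sequence. The labels and MR format below are
# hypothetical, not the quoted work's actual setup.
PERSONALITIES = ["EXTRAVERT", "AGREEABLE", "CONSCIENTIOUS"]

def build_source(meaning_representation: str, personality: str) -> str:
    """Prefix a (delexicalized) meaning representation with a style token."""
    assert personality in PERSONALITIES
    return f"<{personality}> {meaning_representation}"

# Toy usage: the same MR yields a differently conditioned input per personality.
mr = "name[NAME] eattype[restaurant] food[italian]"
for p in PERSONALITIES:
    print(build_source(mr, p))
```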