2018
DOI: 10.1007/978-3-319-99495-6_7
When Less Is More: Using Less Context Information to Generate Better Utterances in Group Conversations

Cited by 7 publications (4 citation statements)
References 26 publications
“…Zhang et al. (2018a) and related work tackle response generation, taking previous utterances as input and the next utterance as output (some of this work also specifically includes the responding speaker and target addressee in the inputs and outputs). Zhang et al. (2018a) report BLEU-n (n-gram based, n = 1, 2, 3, 4) and METEOR (Banerjee and Lavie, 2005) scores (noting that the evaluation could be supplemented); other work reports BLEU, ROUGE (Lin, 2004), noun mentions, and the length of the generated response, along with limited human evaluations for fluency, consistency, and informativeness; or BLEU-n (n = 1, 2, 3, 4), METEOR, and ROUGE-L (L for longest common subsequence), along with human evaluations for fluency, grammaticality, and rationality. Qiu et al. (2020) focus on dialogue thread structures, utilizing structured attention with a variational RNN and reporting the same automatic metrics: BLEU-n (n = 1, 2, 3, 4), METEOR, and ROUGE-L.…”
Section: Response Generation
Citation type: mentioning (confidence: 99%)
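The automatic metrics named in this statement (BLEU-n, METEOR, ROUGE-L) are n-gram- and longest-common-subsequence-based overlap scores. As a rough illustration only, and not the evaluation scripts used by the cited papers, the following Python sketch shows minimal sentence-level BLEU-n and ROUGE-L computations (METEOR is omitted because it requires external linguistic resources):

```python
# Minimal, illustrative implementations of sentence-level BLEU-n and ROUGE-L.
# These are simplified sketches, not the official toolkits used in the cited work.
import math
from collections import Counter


def bleu_n(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights over 1..max_n gram precisions."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())   # clipped n-gram matches
        total = max(sum(cand_ngrams.values()), 1)
        # +1 smoothing so one missing n-gram order does not zero the whole score.
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)


def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L F-score based on the longest common subsequence (LCS)."""
    cand, ref = candidate.split(), reference.split()
    # Dynamic-programming LCS length.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, c in enumerate(cand, 1):
        for j, r in enumerate(ref, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if c == r else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(cand)][len(ref)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)


if __name__ == "__main__":
    hyp = "thanks i will check the logs"
    ref = "thanks i will look at the logs"
    print("BLEU-4:", round(bleu_n(hyp, ref), 3))
    print("ROUGE-L:", round(rouge_l(hyp, ref), 3))
```

In practice the cited papers rely on standard implementations of these metrics; the sketch only makes explicit what the n = 1, 2, 3, 4 and "L for longest common subsequence" qualifiers refer to.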
“…As discussed earlier, the complex history of a multi-party conversation can be structurally separated so that responses can be generated in a more targeted way. Zhang et al. [31] organize the historical utterances of a multi-party conversation into a tree structure according to the "@" reply relations, then split the tree into multiple sequence combinations and generate the response from the utterances on the branch containing the addressee. The experiments in [31] show that using less context information, by excluding utterances from irrelevant branches, produces better results.…”
Citation type: unclassified
“…Zhang et al. [31] organize the historical utterances of a multi-party conversation into a tree structure according to the "@" reply relations, then split the tree into multiple sequence combinations and generate the response from the utterances on the branch containing the addressee. The experiments in [31] show that using less context information, by excluding utterances from irrelevant branches, produces better results. As the TreeSplit results in Table 2 [7, 10, 11, 18, 22, 23, 30–33] show, the model far outperforms models that encode the full dialogue history on some individual metrics, but the method requires explicit addressee labels in the dataset.…”
Citation type: unclassified
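As an illustration of the tree-splitting idea described in these statements, the sketch below is a simplified reconstruction, not the authors' released code: the `Message` fields and the `branch_context` helper are hypothetical names. It builds a reply tree from "@"-style reply links and keeps only the utterances on the branch leading to the message being replied to, discarding the other branches.

```python
# Simplified sketch of the tree-split idea: organize multi-party messages into
# a reply tree via "@" reply relations, then keep only the context on the
# branch that leads to the target message, dropping unrelated branches.
# Illustrative reconstruction only; not the implementation from [31].
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Message:
    msg_id: int
    speaker: str
    text: str
    reply_to: Optional[int]  # id of the message this one "@"-replies to, if any


def branch_context(messages: List[Message], target_id: int) -> List[Message]:
    """Walk from the target message up to the root of its reply branch and
    return that branch in chronological order; utterances on other branches
    of the conversation tree are excluded ("less is more")."""
    by_id: Dict[int, Message] = {m.msg_id: m for m in messages}
    branch: List[Message] = []
    cur = by_id.get(target_id)
    while cur is not None:
        branch.append(cur)
        cur = by_id.get(cur.reply_to) if cur.reply_to is not None else None
    return list(reversed(branch))


if __name__ == "__main__":
    history = [
        Message(1, "alice", "anyone seen the build fail?", None),
        Message(2, "bob", "@alice yes, the tests time out", 1),
        Message(3, "carol", "unrelated: lunch at noon?", None),
        Message(4, "alice", "@bob which test exactly?", 2),
    ]
    # Context used to generate a reply to message 4: only the alice/bob branch;
    # carol's unrelated branch is dropped.
    for m in branch_context(history, target_id=4):
        print(f"{m.speaker}: {m.text}")
```

Note that, as the second statement points out, this style of context selection presupposes explicit addressee labels (here the `reply_to` field) in the dataset.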