2022
DOI: 10.1109/access.2022.3140820

Neural Network With Hierarchical Attention Mechanism for Contextual Topic Dialogue Generation

Abstract: The encoder-decoder model has achieved remarkable results in natural language generation. However, dialogue generation work often ignores the influence of dialogue context information and topic information during generation, so the generated replies either stray from the context or, lacking topic information, degenerate into generic responses. In this work, we study the generation of multi-turn dialogues based on a large corpus and take advantage of the context information and topic information of th…
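The abstract describes an encoder-decoder model that attends over both dialogue context and topic information. As an illustration only, the following is a minimal hierarchical (word-level plus utterance-level) attention encoder in PyTorch; the module names, dimensions, and the additive attention form are assumptions made for this sketch, not the paper's actual architecture.

```python
# Hedged sketch: hierarchical attention over a multi-turn dialogue.
# Word-level attention pools each utterance; utterance-level attention
# pools the turns into one context vector. All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttentionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.word_attn = nn.Linear(hid_dim, 1)   # word-level attention scores
        self.utt_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.utt_attn = nn.Linear(hid_dim, 1)    # utterance-level attention scores

    def forward(self, dialogue):
        # dialogue: (batch, n_turns, n_words) token ids
        b, t, w = dialogue.shape
        x = self.emb(dialogue.view(b * t, w))          # (b*t, w, emb)
        h, _ = self.word_rnn(x)                        # (b*t, w, hid)
        a = F.softmax(self.word_attn(h), dim=1)        # attention over words
        utt = (a * h).sum(dim=1).view(b, t, -1)        # one vector per turn
        ctx, _ = self.utt_rnn(utt)                     # (b, t, hid)
        a2 = F.softmax(self.utt_attn(ctx), dim=1)      # attention over turns
        return (a2 * ctx).sum(dim=1)                   # context vector (b, hid)

# Usage: a batch of 2 dialogues, 3 turns each, 5 tokens per turn.
enc = HierarchicalAttentionEncoder(vocab_size=1000)
print(enc(torch.randint(0, 1000, (2, 3, 5))).shape)    # torch.Size([2, 256])
```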

Cited by 7 publications (4 citation statements)
References 27 publications
“…Mostly, it can separate the significant m features. Hence, it can efficiently decrease the parameters of the … Then, m is represented as given in Equation (28).…”
Section: Global Aspect Based Feature Extraction Module (mentioning)
confidence: 99%
“…As generated responses tend to be generic, several methods have been proposed to improve the diversity of the generated responses, e.g., by incorporating topic information [23] or using latent variable models and gate mechanisms [24]. Li et al. [25] train an encoder-decoder model bidirectionally by adding a backward reasoning step to avoid generic and dull responses.…”
Section: Related Work (mentioning)
confidence: 99%
“…In this study, the contextual sentence encoding result was used as the input of the BERT model, with the Encoder part of the Transformer [6] model as the model framework. The model structure is shown in Figure 3.…”
Section: Coding Model for Multi-turn Dialogue (mentioning)
confidence: 99%
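For readers unfamiliar with the setup this citing paper describes (contextual sentences fed into a BERT-style model built on the Transformer encoder), here is a minimal sketch using PyTorch's built-in Transformer encoder; the vocabulary size, separator token id, and layer sizes are illustrative assumptions, not the cited model.

```python
# Hedged sketch: encoding a multi-turn dialogue with a Transformer encoder.
# Turns are joined into one token sequence with an assumed separator id.
import torch
import torch.nn as nn

d_model = 256
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
emb = nn.Embedding(1000, d_model)

# Three dialogue turns as token-id lists (ids and separator id 2 are assumed).
turns = [[5, 6, 7], [8, 9], [10, 11, 12]]
tokens = []
for t in turns:
    tokens += t + [2]                     # append separator after each turn
x = emb(torch.tensor([tokens]))           # (1, seq_len, d_model)
print(encoder(x).shape)                   # (1, seq_len, d_model) contextual states
```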
“…The retrieval-based method mainly performs response matching at each dialogue turn and is a discriminative model. The neural generation-based models mainly include Sequence-to-Sequence (Seq2Seq) models [3,4,5], Dialogue Context [6], Response Diversity [7,8], Topic and Personality [9,10], and Outside Knowledge Base. The Dialogue Context, Response Diversity, Topic or Personality, and other methods perform multi-class classification over contextual dialogues and then select and integrate the best candidate answers to return.…”
Section: Introduction (mentioning)
confidence: 99%
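Since this introduction contrasts retrieval-based matching with generation-based Seq2Seq models, a minimal GRU Seq2Seq sketch with greedy decoding follows; all sizes, token ids, and the decoding loop are illustrative assumptions rather than any of the surveyed systems.

```python
# Hedged sketch: a minimal GRU encoder-decoder (Seq2Seq) dialogue generator.
# Encodes one source turn, then greedily decodes a reply token by token.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, max_len=10, bos_id=1):
        _, h = self.encoder(self.emb(src))              # encode the source turn
        tok = torch.full((src.size(0), 1), bos_id, dtype=torch.long)
        outputs = []
        for _ in range(max_len):                        # greedy decoding loop
            o, h = self.decoder(self.emb(tok), h)
            tok = self.out(o).argmax(-1)                # pick most likely token
            outputs.append(tok)
        return torch.cat(outputs, dim=1)                # (batch, max_len) reply ids

model = Seq2Seq(vocab_size=1000)
print(model(torch.randint(0, 1000, (2, 6))).shape)      # torch.Size([2, 10])
```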