Information-Enhanced Hierarchical Self-Attention Network for Multiturn Dialog Generation
Year: 2023
DOI: 10.1109/tcss.2022.3172699

Cited by 8 publications (6 citation statements)
References 27 publications
“…TA-Seq2Seq [14] focuses on transforming the conversation topic to assist in response prediction. Combining multiple levels of dialogue context achieves better context modeling and also yields notable effectiveness in response generation tasks, as in HiSA-GDS, HSAN, IEHSA, HDID, and HHKS [15, 24–27]. For example, HiSA-GDS utilizes the word-level and sentence-level history successively to interact with responses.…”
Section: His2res Methods (mentioning, confidence: 99%)
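The hierarchy this excerpt describes, word-level self-attention within each utterance followed by sentence-level self-attention across utterance vectors, can be sketched as below. This is a minimal illustration with assumed dimensions and mean-pooled utterance vectors, not the actual IEHSA or HiSA-GDS implementation.

```python
# Minimal sketch of hierarchical (word-level, then sentence-level) self-attention
# over a multiturn dialog history. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class HierarchicalContextEncoder(nn.Module):
    def __init__(self, vocab_size=10000, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Word-level self-attention within each utterance.
        self.word_attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Sentence-level self-attention across utterance vectors.
        self.sent_attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, dialog):  # dialog: (batch, n_turns, seq_len) token ids
        b, t, s = dialog.shape
        x = self.embed(dialog.view(b * t, s))   # (b*t, s, d) word embeddings
        x = self.word_attn(x)                   # word-level interactions
        utt = x.mean(dim=1).view(b, t, -1)      # pool words -> utterance vectors
        return self.sent_attn(utt)              # sentence-level interactions: (b, t, d)

enc = HierarchicalContextEncoder()
dialog = torch.randint(0, 10000, (2, 5, 12))    # 2 dialogs, 5 turns, 12 tokens each
print(enc(dialog).shape)                        # torch.Size([2, 5, 128])
```

The resulting turn-level context representations would then be attended to by a decoder when generating the response, which is the interaction step the excerpt attributes to HiSA-GDS.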
“…In trajectory modeling, transitions between sets of different locations are characterized. If semantic information is available, supervised techniques such as graph neural networks [11] or SVM [12] can be used. In the context-independent case, unsupervised techniques such as clustering [13] or frequent item mining [14] can be used for this purpose.…”
Section: Work Related To Probability Model-based Processing Of Locati... (mentioning, confidence: 99%)
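For the context-independent case the excerpt mentions, a minimal sketch of clustering raw location points is given below, here with k-means; the synthetic data and parameter choices are assumptions for illustration only.

```python
# Unsupervised grouping of location points with k-means (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(200, 2) * 100           # 200 synthetic (x, y) locations
labels = KMeans(n_clusters=5, n_init=10).fit_predict(points)
print(np.bincount(labels))                      # size of each discovered region
```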
“…[28] reads the word embedding sequence through a sliding convolutional unit and uses max-pooling to obtain the sentence-level history representation, interacting the word- and sentence-level representations with candidate responses to select high-probability responses. [29] proposes a multi-level transformer-based model that divides the word and sentence sequences into fixed-size local units and uses an RNN to encode the units at both levels, enhancing the ability to capture relevant context. In addition to the above hierarchy division methods, the use of knowledge labels can also help predict knowledge [2].…”
Section: Semantic Unit Granularity (mentioning, confidence: 99%)
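The sliding-convolution-plus-max-pooling sentence encoder the excerpt attributes to [28] can be sketched as follows; the kernel size, dimensions, and class name are assumptions, not the cited paper's exact design.

```python
# Minimal sketch of a sliding-convolution + max-pooling sentence encoder.
import torch
import torch.nn as nn

class ConvSentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10000, d_embed=128, d_out=128, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_embed)
        # Sliding convolutional unit over the word embedding sequence.
        self.conv = nn.Conv1d(d_embed, d_out, kernel_size=kernel, padding=kernel // 2)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)  # (batch, d_embed, seq_len) for Conv1d
        h = torch.relu(self.conv(x))            # local n-gram features per position
        return h.max(dim=2).values              # max-pool over time -> (batch, d_out)

enc = ConvSentenceEncoder()
print(enc(torch.randint(0, 10000, (4, 20))).shape)  # torch.Size([4, 128])
```

Max-pooling keeps the strongest n-gram feature in each channel regardless of where it occurs in the sentence, which is what makes this a cheap, position-insensitive way to summarize an utterance before matching it against candidate responses.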