2019
DOI: 10.48550/arxiv.1911.10666
Preprint
Who did They Respond to? Conversation Structure Modeling using Masked Hierarchical Transformer

Abstract: Conversation structure is useful both for understanding the nature of conversation dynamics and for providing features for many downstream applications such as summarization of conversations. In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds. Previous work usually took a pair of utterances to decide whether one utterance is the parent of the other. We believe the entire ancestral history is a ver…
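The abstract frames conversation structure modeling as choosing, for each utterance, which earlier utterance it responds to. A minimal sketch of that framing, assuming a toy shared-token score in place of the learned pairwise scores used in prior work (function names and the example conversation are hypothetical, not from the paper):

```python
# Hypothetical sketch: conversation structure modeling as parent selection.
# For each utterance, score every earlier utterance as a candidate parent
# and pick the argmax. The score here is a toy shared-token count; the
# paper's point is that such pairwise scoring ignores ancestral context.

def score(a: str, b: str) -> int:
    """Toy parent score: number of shared lowercase tokens."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def predict_parents(utterances: list[str]) -> list[int]:
    """Return, for each utterance, the index of its predicted parent.
    The first utterance is the conversation root (parent -1)."""
    parents = [-1]
    for i in range(1, len(utterances)):
        candidates = range(i)  # only earlier utterances can be parents
        parents.append(max(candidates,
                           key=lambda j: score(utterances[i], utterances[j])))
    return parents

conv = [
    "does anyone know how to reset the router",
    "which router model do you have",
    "i have the ax3000 router model",
]
print(predict_parents(conv))  # -> [-1, 0, 1]
```

The pairwise setup above is exactly what the paper argues against: scoring each (child, candidate-parent) pair in isolation discards the ancestral history of the candidate.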

Cited by 3 publications
(1 citation statement)
References 15 publications
“…Kummerfeld et al (2019) use feed-forward networks with averaged pre-trained word embeddings and many hand-engineered features. Tan et al (2019) used an utterance-level LSTM network, while Zhu et al (2019) used a masked transformer to get a context-aware utterance representation considering utterances in the same conversation.…”
Section: Conversation Disentanglement
confidence: 99%
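The citing work highlights the masked transformer's role: restricting attention so an utterance's representation is built only from relevant utterances in the same conversation. A minimal sketch of one plausible masking scheme, assuming each utterance attends to itself and its ancestors in the reply tree (this is an illustrative simplification, not the paper's exact mask):

```python
# Hypothetical sketch of an ancestral attention mask: given parent links,
# mark which utterances each utterance may attend to (itself + ancestors).
# In a real transformer, False entries would be set to -inf before softmax.

def ancestral_mask(parents: list[int]) -> list[list[bool]]:
    n = len(parents)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        j = i
        while j != -1:        # walk up the reply chain to the root
            mask[i][j] = True
            j = parents[j]
    return mask

# parent links: utterance 0 is root; 1 and 2 reply to 0; 3 replies to 2
for row in ancestral_mask([-1, 0, 0, 2]):
    print(row)
```

Here utterance 3 attends to {3, 2, 0} but not to the sibling branch at 1, so its representation reflects its full ancestral history rather than an isolated utterance pair.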