Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1006

Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection

Abstract: Response selection plays an important role in fully automated dialogue systems. Given the dialogue context, the goal of response selection is to identify the best-matched next utterance (i.e., response) from multiple candidates. Despite many useful previous models, this task remains challenging due to the huge semantic gap and the large size of the candidate set. To address these issues, we propose a Spatio-Temporal Matching network (STM) for response selection. In detail, soft alignment is first…
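The abstract's "soft alignment" step is a standard attention operation between context and response token embeddings. The sketch below shows one common form of it — the function name, shapes, and use of plain dot-product scores are illustrative assumptions, not the STM paper's exact formulation:

```python
import numpy as np

def soft_align(context, response):
    """Soft alignment of a response candidate against one context turn.

    context  : (m, d) array of token embeddings for a dialogue turn.
    response : (n, d) array of token embeddings for the candidate.
    Returns an (n, d) array: each response token re-represented as an
    attention-weighted sum of context tokens. Names and shapes here are
    illustrative, not the paper's exact API.
    """
    scores = response @ context.T                 # (n, m) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # row-wise softmax
    return weights @ context                      # (n, d) aligned representation

# usage with toy embeddings
rng = np.random.default_rng(0)
ctx = rng.standard_normal((5, 8))
resp = rng.standard_normal((3, 8))
aligned = soft_align(ctx, resp)
print(aligned.shape)  # (3, 8)
```

In matching networks of this family, the aligned representation is typically compared against the original response embeddings (e.g., by element-wise difference and product) before aggregation.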

Cited by 21 publications (26 citation statements) | References 17 publications
“…Early studies focus on single-turn interactions, considering the dialogue context as a single query formed by concatenating all previous turns (Yan et al., 2016; Lowe et al., 2015; Wang and Jiang, 2016). Later studies turn to multi-turn interactions, in which each turn of the context is used as a separate query (Zhou et al., 2018; Lu et al., 2019; Tao et al., 2019). Recent studies show increasing interest in using pre-trained language models such as BERT (Devlin et al., 2019; Wu et al., 2020; Dario Bertero, 2020).…”
Section: Related Work
confidence: 99%
“…Following the practice in prior works (Zhou et al., 2018; Yan et al., 2016; Wu et al., 2020; Dario Bertero, 2020; Lu et al., 2019; Tao et al., 2019), we train our model with a binary classification objective. We adopted the representation-matching-aggregation framework used in previous works (Zhou et al., 2018; …).…”
Section: Model Design
confidence: 99%
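The binary classification objective mentioned in the excerpt above reduces to binary cross-entropy over matched/mismatched context–response pairs. A minimal sketch, assuming a bilinear scorer over pooled representations (the scorer and all names here are assumptions for illustration, not the cited models' exact architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(W, ctx, resp, labels):
    """Binary classification objective for response matching.

    ctx, resp : (batch, d) pooled context / response representations.
    W         : (d, d) bilinear matching matrix (an assumed scorer).
    labels    : (batch,) with 1 = true next utterance, 0 = negative.
    """
    logits = np.einsum('bi,ij,bj->b', ctx, W, resp)  # bilinear match scores
    p = sigmoid(logits)
    eps = 1e-12  # guard against log(0)
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d)) * 0.1
ctx = rng.standard_normal((4, d))
resp = rng.standard_normal((4, d))
labels = np.array([1.0, 0.0, 1.0, 0.0])
loss = bce_loss(W, ctx, resp, labels)
print(loss > 0)  # a positive scalar loss
```

At inference time the same scorer is applied to every candidate and the highest-scoring response is selected, which is how a binary training objective serves a ranking task.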
“…To tackle this problem, different matching models have been developed to measure the matching degree between a dialogue context and a response candidate (Wu et al., 2017; Lu et al., 2019; Gu et al., 2019). Despite their differences, most prior works train the model on data constructed by a simple heuristic.…”
Section: Introduction
confidence: 99%
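The "simple heuristic" this excerpt refers to is commonly random negative sampling: the observed next utterance is the positive, and negatives are responses drawn from other dialogues. A sketch under that assumption (function and variable names are illustrative):

```python
import random

def build_training_pairs(dialogues, num_negatives=1, seed=0):
    """Heuristic training-data construction for response selection.

    dialogues : list of (context, response) pairs from a corpus.
    Returns (context, candidate, label) triples: the observed response
    gets label 1, and `num_negatives` responses sampled at random from
    other dialogues get label 0. Assumes distinct responses in the corpus.
    """
    rng = random.Random(seed)
    responses = [r for _, r in dialogues]
    examples = []
    for context, response in dialogues:
        examples.append((context, response, 1))      # true next utterance
        for _ in range(num_negatives):
            neg = rng.choice(responses)
            while neg == response:                   # never reuse the positive
                neg = rng.choice(responses)
            examples.append((context, neg, 0))       # random negative
    return examples

data = [("hi, how are you?", "fine, thanks"),
        ("what time is it?", "around noon"),
        ("see you later", "bye!")]
pairs = build_training_pairs(data)
print(len(pairs))  # 6: one positive and one negative per context
```

Randomly sampled negatives are usually easy to reject, which is exactly the weakness the citing paper is pointing at.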
“…A core module in such conversation systems is response selection (Ritter et al., 2011; Hu et al., 2014; Wu et al., 2017; …): identifying the best response from a set of possible candidates given a dialogue context, i.e., the conversation history. For the response selection problem, the prevailing practice is to build neural matching models (Ji et al., 2014; Wang et al., 2015; Wu et al., 2017; Lu et al., 2019) that score the adequacy of individual response candidates in the dialogue context. Most prior works on this topic focus on fine-grained text encoding and better interactions between dialogue context and response candidates, typically via sophisticated and powerful matching networks (Wu et al., 2017; Lu et al., 2019; Gu et al., 2019).…”
Section: Introduction
confidence: 99%
“…For the response selection problem, the prevailing practice is to build neural matching models (Ji et al., 2014; Wang et al., 2015; Wu et al., 2017; Lu et al., 2019) that score the adequacy of individual response candidates in the dialogue context. Most prior works on this topic focus on fine-grained text encoding and better interactions between dialogue context and response candidates, typically via sophisticated and powerful matching networks (Wu et al., 2017; Lu et al., 2019; Gu et al., 2019). Despite their differences, in almost all these previous works, the matching models are trained with a binary classification objective.…”
Section: Introduction
confidence: 99%