Proceedings of the 28th International Conference on Computational Linguistics: Industry Track 2020
DOI: 10.18653/v1/2020.coling-industry.10
Delexicalized Paraphrase Generation

Abstract: We present a neural model for paraphrasing and train it to generate delexicalized sentences. We achieve this by creating training data in which each input is paired with a number of reference paraphrases. These sets of reference paraphrases represent a weak type of semantic equivalence based on annotated slots and intents. To capture semantics from different types of slots, rather than simply anonymizing them, we apply convolutional neural networks (CNNs) prior to pooling on slot values and use pointers to locate s…
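The delexicalization step described above replaces annotated slot values with placeholder tokens before training. A minimal sketch of that step, assuming a simple string-substitution scheme (the slot names, placeholder format, and example utterance here are illustrative, not taken from the paper):

```python
def delexicalize(sentence: str, slots: dict[str, str]) -> str:
    """Replace each annotated slot value in the sentence with a
    placeholder token derived from the slot name, e.g. [city]."""
    out = sentence
    for slot_name, slot_value in slots.items():
        out = out.replace(slot_value, f"[{slot_name}]")
    return out

# Hypothetical utterance with slot annotations:
utterance = "book a flight from Boston to Seattle on Friday"
annotations = {"origin": "Boston", "destination": "Seattle", "date": "Friday"}

print(delexicalize(utterance, annotations))
# → book a flight from [origin] to [destination] on [date]
```

Note that the paper goes further than this plain anonymization: it encodes the slot values themselves with CNNs before pooling, so the model still sees semantic information about what filled each slot.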

Cited by 2 publications (1 citation statement) · References 12 publications
“…Depending on the complexity of the changes that occur, domain adaptation of data-driven systems can be approached in two directions: (i) improving the model robustness, and (ii) adapting the training dialogues to the new situation. While the first direction has been largely explored through several techniques, including transfer learning (Louvan and Magnini, 2019), zero-shot learning through schema-guided models (Wu et al., 2019a; Kim et al., 2020; Zhang et al., 2019; Heck et al., 2020), and delexicalization (Henderson et al., 2014a,b; Yu et al., 2020), in this paper we take the second, less investigated, perspective, focusing on the relation between training dialogues and domain knowledge.…”
Section: Introduction (mentioning, confidence: 99%)