Proceedings of the 28th International Conference on Computational Linguistics: Industry Track 2020
DOI: 10.18653/v1/2020.coling-industry.2

Data-Efficient Paraphrase Generation to Bootstrap Intent Classification and Slot Labeling for New Features in Task-Oriented Dialog Systems

Abstract: Recent progress through advanced neural models has pushed the performance of task-oriented dialog systems to almost perfect accuracy on existing benchmark datasets for intent classification and slot labeling. However, in evolving real-world dialog systems, where new functionality is regularly added, a major additional challenge is the lack of annotated training data for such new functionality, as the necessary data collection efforts are laborious and time-consuming. A potential solution to reduce the effort is to…

Cited by 13 publications (15 citation statements) | References 20 publications
“…Our approach is focused on generating utterances in the dialog domain from a sequence of slots, conditioned on both intent and language. Jolly et al. (2020) showed that an interpretation-to-text model can be used with shuffling-based sampling techniques to generate diverse and novel paraphrases from small amounts of seed data, which improve accuracy when added to the existing training data. Our approach differs in that our model can generate the slot annotations along with the utterance, which are necessary for the slot labeling task.…”
Section: Related Work
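To make the shuffling-based sampling idea concrete, the following is a minimal sketch, not the paper's implementation: the function name, the linearization format, and the seed example are all hypothetical, and the downstream interpretation-to-text generator is left abstract.

```python
# Sketch of shuffling-based sampling over an annotated seed example.
# Each slot-order permutation is linearized and would be fed to an
# interpretation-to-text model to elicit structurally diverse paraphrases.
import random

def shuffled_interpretations(intent, slots, n_samples=5, seed=0):
    """Yield de-duplicated slot-order permutations of one seed example."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_samples):
        order = slots[:]           # copy so the seed stays untouched
        rng.shuffle(order)
        key = tuple(s for s, _ in order)
        if key in seen:            # skip duplicate permutations
            continue
        seen.add(key)
        # Hypothetical linearization: "Intent: slot=value slot=value ..."
        yield f"{intent}: " + " ".join(f"{s}={v}" for s, v in order)

# Example: one seed interpretation for a music-playing feature.
for x in shuffled_interpretations(
        "PlayMusic", [("SongName", "yesterday"), ("ArtistName", "the beatles")]):
    print(x)   # e.g. "PlayMusic: ArtistName=the beatles SongName=yesterday"
```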
“…However, labeled examples for the new feature are typically limited to a small set of seed examples, as collecting more annotations would make feature expansion costly and slow. As a possible solution, previous work explored the automatic generation of paraphrases to augment the seed data (Malandrakis et al., 2019; Cho et al., 2019; Jolly et al., 2020).…”
Section: Introduction
“…Intent classification and slot labeling are two fundamental tasks in spoken language understanding, dating back to the early 90s (Price, 1990). With the rise of task-oriented personal assistants, the two tasks have received more attention, and progress has been made by applying various deep learning techniques (Abujabal and Gaspers, 2019; Goo et al., 2018; Jolly et al., 2020; Mesnil et al., 2013; Zhang and Wang, 2016). While we focus on resolving annotation conflicts for NLU with linear labeling, i.e., intent and slot labels, our approach can still be used for other, more complex tree-based labeling, e.g., labeling dependency parses or ontology trees (Chen and Manning, 2014), with the minor change of replacing the task-specific neural LSTM-based classification model.…”
Section: Related Work
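The "LSTM-based classification model" mentioned above typically means a joint encoder with an utterance-level intent head and a token-level slot head. The sketch below is an assumed, illustrative architecture (names, dimensions, and pooling choice are not from the paper), not the cited system.

```python
# Minimal joint intent-classification / slot-labeling model with a BiLSTM
# encoder. All hyperparameters here are illustrative placeholders.
import torch
import torch.nn as nn

class JointIcSlModel(nn.Module):
    def __init__(self, vocab_size, n_intents, n_slot_tags,
                 emb_dim=100, hidden_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, n_intents)  # per utterance
        self.slot_head = nn.Linear(2 * hidden_dim, n_slot_tags)  # per token

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))            # (B, T, 2H)
        intent_logits = self.intent_head(h.mean(dim=1))  # pool over tokens
        slot_logits = self.slot_head(h)                  # one tag per token
        return intent_logits, slot_logits

# Example: score a batch holding one 4-token utterance.
model = JointIcSlModel(vocab_size=1000, n_intents=7, n_slot_tags=11)
intent_logits, slot_logits = model(torch.randint(0, 1000, (1, 4)))
print(intent_logits.shape, slot_logits.shape)  # (1, 7) and (1, 4, 11)
```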
“…PlayMusic), and (2) a slot labeling (SL) model, which classifies tokens into slot types out of a predefined set (e.g., SongName) (Goo et al., 2018; Jolly et al., 2020). An example utterance is shown in Figure 1, with two conflicting annotations.…”
Section: Introduction
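For concreteness, the snippet below shows a hypothetical pair of conflicting annotations of the kind the quote describes (it is not the paper's actual Figure 1): one intent label per utterance and one BIO slot tag per token, with the disagreements flagged.

```python
# Hypothetical IC/SL annotations for one utterance, where two annotators
# disagree on the intent and on how to tag the title span.
utterance = ["play", "yesterday", "by", "the", "beatles"]

annotation_a = {
    "intent": "PlayMusic",
    "slots":  ["O", "B-SongName", "O", "B-ArtistName", "I-ArtistName"],
}
annotation_b = {  # conflict: different intent, title read as an album name
    "intent": "PlayAlbum",
    "slots":  ["O", "B-AlbumName", "O", "B-ArtistName", "I-ArtistName"],
}

for tok, a, b in zip(utterance, annotation_a["slots"], annotation_b["slots"]):
    flag = "<- conflict" if a != b else ""
    print(f"{tok:10s} {a:14s} {b:14s} {flag}")
```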