2022
DOI: 10.48550/arxiv.2206.05790
Preprint
Data Augmentation for Intent Classification

Abstract: Training accurate intent classifiers requires labeled data, which can be costly to obtain. Data augmentation methods may ameliorate this issue, but the quality of the generated data varies significantly across techniques. We study the process of systematically producing pseudo-labeled data given a small seed set using a wide variety of data augmentation techniques, including mixing methods together. We find that while certain methods dramatically improve qualitative and quantitative performance, other methods …
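The pipeline the abstract describes, producing pseudo-labeled data from a small seed set via augmentation, can be sketched as follows. This is a minimal illustration, not the paper's exact method: the word-dropout augmenter and the label-propagation scheme are assumptions chosen for brevity.

```python
import random

def word_dropout(utterance: str, p: float = 0.1, seed: int = 0) -> str:
    """Surface-form augmentation: randomly drop words, keeping at least one."""
    rng = random.Random(seed)
    words = utterance.split()
    kept = [w for w in words if rng.random() > p] or words[:1]
    return " ".join(kept)

def augment_seed_set(seed_data, n_variants=3):
    """Build a pseudo-labeled set by pairing each augmented utterance
    with the intent label of the seed example it came from."""
    augmented = []
    for text, intent in seed_data:
        for i in range(n_variants):
            augmented.append((word_dropout(text, seed=i), intent))
    return augmented

seed = [("book a flight to boston", "book_flight"),
        ("what is the weather tomorrow", "get_weather")]
pseudo = augment_seed_set(seed)  # 6 pseudo-labeled examples from 2 seeds
```

In practice the paper studies many such augmenters and their mixtures; a quality filter (e.g. discarding low-confidence pseudo-labels) would typically sit between augmentation and training.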

Cited by 2 publications (2 citation statements). References 13 publications.
“…However, OOD cases are by definition areas the network has not seen, leading to poor performance. Data augmentation and other robustness methods may serve as a strong tool to cover the unknown space by maximizing the diversity of the examples (Ng et al., 2020; Chen and Yin, 2022).…”
Section: Results (citation type: mentioning; confidence: 99%)
“…Alternatively, and perform data augmentation with prompting, but their prompts are not compositional since their task setups are focused on single-aspect class prediction. Data augmentation is a common technique in NLP for counteracting the limited data available with few-shot learning (Feng et al., 2021; Chen and Yin, 2022). Flavors of data augmentation include surface-form alteration (Wei and Zou, 2019), latent perturbation (Sennrich et al., 2016; Fabius et al., 2015), or auxiliary supervision (Chen and Yu, 2021).…”
Section: TOPv2 (citation type: mentioning; confidence: 99%)
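As an illustration of the surface-form alteration family cited above, here is a random-swap operation in the spirit of EDA (Wei and Zou, 2019). This is a sketch of one of EDA's four operations, not a reproduction of the cited work; the full method also performs synonym replacement, random insertion, and random deletion.

```python
import random

def random_swap(utterance: str, n_swaps: int = 1, seed: int = 0) -> str:
    """Surface-form alteration: swap two randomly chosen word positions.
    The label of the utterance is assumed unchanged by the edit."""
    rng = random.Random(seed)
    words = utterance.split()
    for _ in range(n_swaps):
        if len(words) < 2:
            break  # nothing to swap in a one-word utterance
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

augmented = random_swap("set an alarm for six am")
```

Operations like this preserve the bag of words (and usually the intent) while perturbing the surface form, which is why they are cheap to apply to a small seed set.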