2022
DOI: 10.48550/arxiv.2210.03003
Preprint

Enhancing Code Classification by Mixup-Based Data Augmentation

Abstract: Recently, deep neural networks (DNNs) have been widely applied in programming language understanding. Generally, training a DNN model with competitive performance requires massive and high-quality labeled training data. However, collecting and labeling such data is time-consuming and labor-intensive. To tackle this issue, data augmentation has been a popular solution, which delicately increases the training data size, e.g., adversarial example generation. However, few works focus on employing it for programming…
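The Mixup technique named in the title interpolates pairs of training examples and their labels. A minimal NumPy sketch of standard Mixup (Zhang et al., 2018) is shown below; the function name, embedding dimensions, and the idea of mixing code-snippet embeddings are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two training examples and their one-hot labels.

    The mixing ratio lam is drawn from a Beta(alpha, alpha)
    distribution; both the inputs and the labels are combined
    as convex combinations with the same ratio.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2  # interpolated input (e.g. a code embedding)
    y = lam * y1 + (1 - lam) * y2  # interpolated soft label
    return x, y

# Hypothetical example: mix two 4-d "embeddings" of code snippets
# belonging to classes 0 and 1.
x1, y1 = np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 0.0])
x2, y2 = np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.0, 1.0])
x, y = mixup(x1, y1, x2, y2)
```

Because the label is mixed with the same coefficient as the input, the resulting soft label still sums to one, which is what lets a standard cross-entropy loss consume the augmented pair directly.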

Cited by 1 publication (2 citation statements)
References 23 publications
“…Mixup with label preserving methods is another augmentation approach [41] which extends Mixup by incorporating label preserving. It applies modifications to the input features while preserving the original labels.…”
Section: Speech Signal Modification
confidence: 99%
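The "label preserving" idea quoted above can be illustrated with a small sketch: the input features are modified while the original label is returned unchanged. This is a hypothetical illustration of the concept, not code from reference [41]; the Gaussian-noise perturbation and all names are assumptions:

```python
import numpy as np

def label_preserving_augment(x, y, noise_scale=0.01, rng=None):
    """Perturb the input features slightly while keeping the label.

    Unlike plain Mixup, which produces a soft label, a
    label-preserving augmentation hands the ORIGINAL label back
    untouched alongside the modified input.
    """
    rng = rng or np.random.default_rng(0)
    x_aug = x + rng.normal(scale=noise_scale, size=x.shape)  # small feature jitter
    return x_aug, y  # label is deliberately unchanged

x = np.array([0.5, -0.2, 1.0])
y = 2  # class index, preserved by construction
x_aug, y_aug = label_preserving_augment(x, y)
```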
“…SamplePairing [40]; Mixup with label preserving methods [41]. Spectrogram Modification: SpecAugment [42] (warping the features, masking blocks of frequency channels, and masking blocks of time steps); VTLP [43] (random linear distortion by frequency)…”
Section: Speech Signal Modification
confidence: 99%