2020
DOI: 10.1155/2020/8516216
Joint Character-Level Convolutional and Generative Adversarial Networks for Text Classification

Abstract: With the continuous renewal of text classification rules, text classifiers need stronger generalization ability to process datasets with new text categories or small training samples. In this paper, we propose a text classification framework for conditions with insufficient training samples. In the framework, we first quantify the texts with a character-level convolutional neural network and feed the textual features into an adversarial network and a classifier. Then, we use the real textual …
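The abstract's "quantify the texts by a character-level convolutional neural network" step starts with character quantization: each character is one-hot encoded over a fixed alphabet, yielding a matrix a 1D CNN can consume. The following is a minimal sketch of that encoding only; the alphabet, the sequence length, and the function name are illustrative assumptions (modeled on common character-level CNN setups), not details taken from this paper.

```python
# Sketch of character-level text quantization: each character maps to a
# one-hot vector over a fixed alphabet, producing a (SEQ_LEN, alphabet_size)
# matrix. Alphabet and length here are illustrative, not from the paper.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}
SEQ_LEN = 16  # hypothetical fixed input length; real models use far longer

def quantize(text):
    """One-hot encode `text`: unknown characters become all-zero rows,
    and the sequence is truncated or zero-padded to SEQ_LEN."""
    matrix = [[0] * len(ALPHABET) for _ in range(SEQ_LEN)]
    for pos, ch in enumerate(text.lower()[:SEQ_LEN]):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:
            matrix[pos][idx] = 1
    return matrix

m = quantize("Hello 42")
```

The resulting matrix would then be passed through convolutional layers to produce the textual features that, per the abstract, feed both the adversarial network and the classifier.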

Cited by 186 publications (261 citation statements)
References 28 publications
“…Kim [9] used a CNN to extract sentence features and achieved good results in sentence classification. Zhang et al. [24] treated text as a kind of raw signal at the character level and applied a CNN to extract text features, achieving promising results. In addition, text is sequence data, and RNNs, LSTMs, and their variants are often used to process this type of data.…”
Section: B. Classical Neural Network Based Methods
confidence: 99%
“…We make use of two ratings prediction datasets with classes in the range 1-5 and, similarly to Zhang et al. (2015), reformulate the task as a binary sentiment classification task by merging the provided labels; 1-2: negative and 3-4: positive. We focus on similar, within-task (i.e.…”
Section: Datasets and Domains
confidence: 99%
“…Compared to the SRR-LSTM approach developed by Zhou et al. (2021b), the training time for 1D-CNN model development is less than one fourth of the time required to train the LSTM models used in the SRR-LSTM approach. Although the LSTM models developed in this study are only one-third the size of the 1D-CNN models, the 1D-CNN models train more quickly because computation in convolutional layers is highly parallelized, while computation in LSTM layers is mostly sequential (Bai et al., 2018; Zhang et al., 2015). The input time series to these models are long sequences of 384 elements, which adds computational burden to the sequential processing.…”
Section: Computational Efficiency
confidence: 99%
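The parallelism argument in the excerpt above can be illustrated with a toy comparison: in a 1D convolution, every output position depends only on a fixed local window of the input, so all positions can be computed independently, whereas a recurrence must update its state step by step. This is a hedged sketch with scalar arithmetic standing in for the real layers; none of the names or values come from the cited works.

```python
def conv1d(xs, kernel):
    """1D convolution (no padding): each output depends only on a local
    window of the input, so the loop iterations are independent and
    could be evaluated in parallel."""
    k = len(kernel)
    return [sum(kernel[j] * xs[i + j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def recurrence(xs, a=0.5):
    """Scalar recurrence: each state depends on the previous one, forcing
    sequential evaluation, as in an LSTM's hidden-state update
    (simplified here to a single scalar)."""
    h, states = 0.0, []
    for x in xs:
        h = a * h + x
        states.append(h)
    return states
```

The independent-window structure of `conv1d` is what GPU implementations exploit, while the data dependence in `recurrence` is the sequential bottleneck the excerpt describes for LSTM layers.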