Text classification is a crucial task in data mining and artificial intelligence. In recent years, deep learning-based text classification methods have made great progress. These methods typically supervise model training by representing each label as a one-hot vector. However, the one-hot representation cannot adequately reflect the relation between an instance and the labels: labels are often not completely independent, and in practice an instance may be associated with multiple labels. Simply representing the labels as one-hot vectors therefore leads to overconfidence in the model, making it difficult to distinguish easily confused labels. In this paper, we propose a simulated label distribution method based on concepts (SLDC) to tackle this problem. The method captures the overlap between labels by computing the similarity between an instance and each label, and generates a new simulated label distribution to assist model training. In particular, we incorporate conceptual information from a knowledge base into the representations of instances and labels to address the surface mismatch problem that arises when instances and labels are compared for similarity. Moreover, to make full use of both the simulated label distribution and the original label vector, we set up a multi-loss function to supervise the training process. Extensive experiments demonstrate the effectiveness of SLDC on five complex text classification datasets. Further experiments also verify that SLDC is especially helpful for datasets with easily confused labels.
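
As a rough illustration of the multi-loss idea, the sketch below (in PyTorch; the function names, the cosine-similarity choice, and the fixed weighting are assumptions for illustration, not the authors' implementation) combines the usual cross-entropy against the one-hot labels with a KL-divergence term against a simulated distribution derived from instance-label similarity:

    # Hypothetical sketch: simulated label distribution from instance-label similarity
    # plus a combined loss. In the paper, instance and label representations would also
    # incorporate concept information from a knowledge base; that step is omitted here.
    import torch
    import torch.nn.functional as F

    def simulated_label_distribution(instance_emb, label_embs, temperature=1.0):
        # instance_emb: (batch, dim); label_embs: (num_labels, dim)
        sims = F.cosine_similarity(
            instance_emb.unsqueeze(1), label_embs.unsqueeze(0), dim=-1
        )                                      # (batch, num_labels)
        return F.softmax(sims / temperature, dim=-1)

    def sldc_style_loss(logits, targets, instance_emb, label_embs, alpha=0.5):
        # Cross-entropy against the original one-hot (index) labels.
        ce = F.cross_entropy(logits, targets)
        # KL divergence between model predictions and the simulated distribution.
        sim_dist = simulated_label_distribution(instance_emb, label_embs)
        kl = F.kl_div(F.log_softmax(logits, dim=-1), sim_dist, reduction="batchmean")
        return alpha * ce + (1 - alpha) * kl
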
Training a deep-learning text classification model usually requires a large amount of labeled data, yet labeling data is labor-intensive and time-consuming. Few-shot text classification focuses on predicting unknown samples using only a few labeled samples. Recently, metric-based meta-learning methods have achieved promising results in few-shot text classification. They use episodic training on labeled samples to enhance the model's generalization ability. However, existing models focus only on learning from a few labeled samples and neglect the large number of unlabeled samples. In this paper, we exploit the knowledge the model learns from unlabeled samples to improve the generalization performance of the meta-network. Specifically, we introduce a novel knowledge distillation method that expands and enriches the meta-learning representation with self-supervised information. Meanwhile, we design a graph aggregation method that efficiently exchanges information between the query set and the support set within each task and outputs a more discriminative representation. We conducted experiments on three public few-shot text classification datasets. The experimental results show that our model outperforms state-of-the-art models in the 5-way 1-shot and 5-way 5-shot settings.
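
For context, the sketch below (PyTorch; all names and details are assumptions, not the authors' code) shows only the basic episodic, metric-based step that such N-way K-shot meta-learning models build on; the knowledge-distillation and graph-aggregation components described above are not reproduced:

    # Minimal prototype-based episode: classify queries by distance to class prototypes
    # computed from the support set, then backpropagate the query loss.
    import torch
    import torch.nn.functional as F

    def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
        # support_y and query_y hold class indices in [0, n_way)
        support_emb = encoder(support_x)           # (n_way * k_shot, dim)
        query_emb = encoder(query_x)               # (n_query, dim)
        # One prototype per class: the mean of its support embeddings.
        prototypes = torch.stack(
            [support_emb[support_y == c].mean(dim=0) for c in range(n_way)]
        )                                          # (n_way, dim)
        # Score queries by negative Euclidean distance to each prototype.
        logits = -torch.cdist(query_emb, prototypes)   # (n_query, n_way)
        return F.cross_entropy(logits, query_y)
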