The general language model BERT, pre-trained on a cross-domain text corpus (BookCorpus and Wikipedia), achieves excellent performance on a range of natural language processing tasks by fine-tuning on downstream tasks. However, it still lacks the task-specific and domain-related knowledge needed to further improve performance, and more detailed analyses of fine-tuning strategies are necessary. To address these problems, a BERT-based text classification model, BERT4TC, is proposed; it constructs an auxiliary sentence to turn the classification task into a binary sentence-pair one (see the sketch below), aiming to address the limited-training-data and task-awareness problems. The architecture and implementation details of BERT4TC are presented, along with a post-training approach for addressing BERT's domain challenge. Finally, extensive experiments are conducted on seven widely studied public datasets to analyze fine-tuning strategies from the perspectives of learning rate, sequence length, and hidden-state vector selection. BERT4TC models with different auxiliary sentences and post-training objectives are then compared and analyzed in depth. The experimental results show that BERT4TC with a suitable auxiliary sentence significantly outperforms both typical feature-based methods and fine-tuning methods, and achieves new state-of-the-art performance on multi-class classification datasets. On binary sentiment classification datasets, BERT4TC post-trained on a suitable domain-related corpus also achieves better results than the original BERT model. INDEX TERMS Natural language processing, text classification, bidirectional encoder representations from transformers, neural networks, language model.
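A minimal sketch of the auxiliary-sentence idea described above: each k-class example is converted into k binary sentence-pair examples, one per candidate label, and BERT scores whether each pair "matches". The abstract does not give BERT4TC's exact templates, so the label-based template and the `make_sentence_pairs` helper below are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch: turning a k-class classification example into k binary
# sentence-pair examples, assuming a Hugging Face BERT tokenizer.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def make_sentence_pairs(text, labels):
    """For each candidate label, pair an auxiliary sentence with the text.

    The binary target (not shown) is whether that label is the true one;
    at prediction time, the label whose pair scores highest is chosen.
    """
    pairs = []
    for label in labels:
        aux = f"The topic of this text is {label}."  # hypothetical template
        # Encoding two strings produces a sentence-pair input
        # ([CLS] aux [SEP] text [SEP]) with segment ids set accordingly.
        enc = tokenizer(aux, text, truncation=True, max_length=128)
        pairs.append(enc)
    return pairs

examples = make_sentence_pairs("The stock market rallied today.",
                               ["business", "sports", "politics"])
```

One design note: this conversion multiplies the number of training instances by the number of classes, which is one way such reformulations can help when labeled data is limited.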
The applications of data augmentation in natural language processing have been limited. In this paper, we propose a novel method named Hierarchical Data Augmentation (HDA) for text classification. First, inspired by the hierarchical structure of texts, in which words form a sentence and sentences form a document, HDA implements a hierarchical data augmentation strategy by augmenting texts at the word level and the sentence level respectively. Second, inspired by cropping, a popular data augmentation method in computer vision, HDA uses an attention mechanism at each level to distill (crop) the important content from texts as summaries. Specifically, we use a trained Hierarchical Attention Networks (HAN) model to obtain attention values for all documents in the training set at both levels, which are then used to extract the most important words/sentences and generate new samples by concatenating them in their original order, as sketched below. This yields two augmented datasets, WordSet and SentSet. Finally, we extend the training set with a certain amount of HDA-generated samples and evaluate models' performance on the extended training set. The results reveal that HDA can generate massive, high-quality augmented samples at both levels, and that models trained with these samples obtain significant improvements. Compared with existing methods, HDA is simple in both theory and implementation, and it augments texts at two levels to increase the diversity of the data.
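A minimal sketch of the word-level "cropping" step, assuming the per-token attention weights have already been obtained from a trained HAN model (the hard-coded values below are purely illustrative). The same idea applies at the sentence level with per-sentence attention values.

```python
# Hedged sketch: keep the highest-attention tokens and concatenate them
# in their original order to form a new augmented sample.
import numpy as np

def crop_by_attention(tokens, attention, keep_ratio=0.5):
    """Return the top-attention tokens, preserving original word order."""
    k = max(1, int(len(tokens) * keep_ratio))
    top = np.argsort(attention)[-k:]  # indices of the k most important tokens
    top = np.sort(top)                # restore original order before joining
    return [tokens[i] for i in top]

# Hypothetical example; real attention values would come from HAN.
tokens = ["the", "film", "was", "surprisingly", "moving", "and", "well", "acted"]
attn = np.array([0.02, 0.20, 0.03, 0.18, 0.25, 0.02, 0.12, 0.18])
augmented = " ".join(crop_by_attention(tokens, attn, keep_ratio=0.5))
# -> "film surprisingly moving acted" (a new word-level augmented sample)
```

Varying `keep_ratio` (or applying the crop per sentence versus per document) is one plausible way to generate many distinct samples from a single text, which is consistent with the two augmented datasets, WordSet and SentSet, described above.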