Natural language processing is an interdisciplinary field that draws on computer science, mathematics, and linguistics, and text classification is one of its most important research areas and directions. In the era of big data, how to classify text information effectively amid a flood of text-based data is a central focus of current research. This paper reviews the theoretical foundations of text classification, covering its basic concepts, text representation methods, and text classifiers. It first introduces the basic concepts of text classification and the classification process, then presents the model architectures of convolutional and recurrent neural networks and their variants, and finally describes the structure and working principles of two classical word embedding models, Word2vec and BERT.
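To make the classification process mentioned above concrete, the sketch below walks through a minimal pipeline: tokenize raw text, represent each document as a bag of words, and score candidate labels with a multinomial naive Bayes classifier. The toy corpus, the labels, and the add-one smoothing choice are illustrative assumptions, not part of the methods surveyed in this paper; real systems would substitute the neural models and embeddings discussed later.

```python
import math
from collections import Counter, defaultdict

# Toy training corpus of (text, label) pairs -- purely illustrative.
TRAIN = [
    ("the match ended with a late goal", "sports"),
    ("the team won the championship game", "sports"),
    ("the new phone has a faster processor", "tech"),
    ("the laptop ships with more memory", "tech"),
]

def tokenize(text):
    """Step 1 of the pipeline: split raw text into lowercase tokens."""
    return text.lower().split()

def train_naive_bayes(samples):
    """Steps 2-3: build bag-of-words counts per class and document
    counts per class (the sufficient statistics for naive Bayes)."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of documents
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Step 4: pick the label maximizing
    log P(label) + sum_w log P(w | label)."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total_docs)
        for tok in tokenize(text):
            # Add-one (Laplace) smoothing: unseen words keep a
            # small nonzero probability instead of zeroing the score.
            score += math.log(
                (word_counts[label][tok] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_naive_bayes(TRAIN)
print(classify("the team scored a goal", *model))    # -> sports
print(classify("a phone with more memory", *model))  # -> tech
```

The bag-of-words representation here plays the role that Word2vec or BERT embeddings would play in the neural approaches described later: both turn variable-length text into a fixed representation a classifier can score.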