The success of large-scale pre-trained language models in Natural Language Processing (NLP) has encouraged their adoption in genomics and single-cell biology. Pre-trained models built on rapidly growing single-cell transcriptomic data can help unravel the intricate language of cells. However, current single-cell pre-trained models focus primarily on learning gene and cell representations from large volumes of gene expression data; they fail to fully capture the biological significance of the gene expression patterns and cell types they identify, which limits their interpretability and transferability. We propose scKEPLM, a knowledge-enhanced single-cell pre-trained language model that integrates a biological knowledge graph into single-cell transcriptome pre-training. scKEPLM covers over 41 million single-cell RNA sequences and 8.9 million gene relations. Through parallel pre-training on single-cell transcriptome sequences and genetic knowledge, combined with a Gaussian cross-attention mechanism, scKEPLM precisely aligns cell semantics with genetic information and thereby learns more accurate and comprehensive representations of single-cell transcriptomes. This knowledge enhancement improves scKEPLM's identification of important genes in cells and greatly enriches its characterization of cell function and disease mechanisms. scKEPLM achieves state-of-the-art performance on more than 12 downstream tasks, including gene annotation, cell annotation, and drug response prediction, demonstrating strong generalization and transferability. Further analysis of the model's interpretability shows that it adapts to variations in gene expression patterns within cells under different physiological and pathological conditions.
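The abstract leaves the Gaussian cross-attention mechanism unspecified, so the following is a minimal sketch of one plausible reading: standard query-key cross-attention between transcriptome token states and knowledge-graph entity states, with the scores reweighted by a learnable Gaussian kernel. The class name `GaussianCrossAttention`, the tensor names `cell_states` and `kg_states`, and the exact parameterization are all illustrative assumptions, not the paper's implementation.

```python
import math
import torch
import torch.nn as nn

class GaussianCrossAttention(nn.Module):
    """Hypothetical sketch: cross-attention from cell tokens to knowledge
    entities, with scores modulated by a learnable Gaussian kernel."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)  # queries from cell tokens
        self.k_proj = nn.Linear(d_model, d_model)  # keys from KG entities
        self.v_proj = nn.Linear(d_model, d_model)  # values from KG entities
        # Learnable mean and spread of the Gaussian prior over raw scores
        # (assumed parameterization, not taken from the paper).
        self.mu = nn.Parameter(torch.zeros(1))
        self.log_sigma = nn.Parameter(torch.zeros(1))
        self.scale = math.sqrt(d_model)

    def forward(self, cell_states: torch.Tensor, kg_states: torch.Tensor) -> torch.Tensor:
        # cell_states: (batch, n_genes, d_model) transcriptome token embeddings
        # kg_states:   (batch, n_entities, d_model) knowledge-graph embeddings
        q = self.q_proj(cell_states)
        k = self.k_proj(kg_states)
        v = self.v_proj(kg_states)
        scores = q @ k.transpose(-2, -1) / self.scale  # (batch, n_genes, n_entities)
        sigma = self.log_sigma.exp()
        # Gaussian reweighting: scores near mu are emphasized, outliers damped.
        gauss = torch.exp(-0.5 * ((scores - self.mu) / sigma) ** 2)
        attn = torch.softmax(scores * gauss, dim=-1)
        return attn @ v  # knowledge-aligned cell token representations

# Example: align 128 gene tokens per cell with 64 knowledge entities.
layer = GaussianCrossAttention(d_model=256)
out = layer(torch.randn(2, 128, 256), torch.randn(2, 64, 256))  # (2, 128, 256)
```

Under this reading, the Gaussian kernel acts as a soft filter on the alignment scores, which is one way such a mechanism could suppress spurious gene-to-knowledge matches; the published model may implement it differently.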