Purpose: This study aims to improve the efficiency and effectiveness of Chinese Clinical Named Entity Recognition by enhancing the BERT-BiLSTM-CRF model with the RoBERTa pre-training model.

Design/methodology/approach: A deep learning approach is employed, combining the RoBERTa pre-training model, a Bi-directional Long Short-Term Memory (BiLSTM) network, and a Conditional Random Field (CRF) layer into a Named Entity Recognition (NER) model. The model takes the representations produced by the pre-trained RoBERTa model as input, which mitigates the scarcity of annotated datasets; it leverages the strength of BiLSTM in learning the contextual information of words and uses the CRF layer to infer the best label sequence from global information.

Findings: The RoBERTa-BiLSTM-CRF model achieved satisfactory results in the experiments. It strengthens the reasoning about dependencies between characters, allows the model to fully learn the feature information of the text, and improves model performance to a certain extent.

Originality/value: This paper proposes a RoBERTa-based medical named entity recognition model to address the scarcity of annotated data in medical NER tasks and BERT's inability to obtain word-level information. The model is not limited to medical entity recognition and shows potential for other medical natural language processing tasks; data enhancement, data optimization, and domain transfer could further improve its performance and generalization capabilities.
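The role of the CRF layer described above — choosing a label sequence using global (transition) information rather than per-token scores alone — can be sketched with plain Viterbi decoding. This is a minimal illustration, not the paper's implementation; the tag set, emission scores, and transition scores below are hand-picked assumptions:

```python
def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence.

    emissions: list of {tag: score} dicts, one per token.
    transitions: {(prev_tag, tag): score} dict, shared across positions.
    """
    tags = list(emissions[0].keys())
    # Best path score ending in each tag at the first token.
    score = {t: emissions[0][t] for t in tags}
    back = []  # backpointers: one {tag: best_prev_tag} dict per later token
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: score[p] + transitions[(p, t)])
            new_score[t] = score[best_prev] + transitions[(best_prev, t)] + em[t]
            ptr[t] = best_prev
        score, back = new_score, back + [ptr]
    # Trace the best path backwards from the best final tag.
    last = max(tags, key=lambda t: score[t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))


# Toy example: 3 tokens, BIO tags for a disease entity. The strong
# negative score for O -> I-DIS encodes the global constraint that an
# entity cannot begin with an inside tag (an assumed, illustrative value).
TAGS = ["B-DIS", "I-DIS", "O"]
emissions = [
    {"B-DIS": 2.0, "I-DIS": 0.0, "O": 1.0},
    {"B-DIS": 0.0, "I-DIS": 2.0, "O": 1.5},
    {"B-DIS": 0.0, "I-DIS": 0.0, "O": 2.0},
]
transitions = {(p, t): 0.0 for p in TAGS for t in TAGS}
transitions[("B-DIS", "I-DIS")] = 1.0
transitions[("I-DIS", "I-DIS")] = 1.0
transitions[("O", "I-DIS")] = -5.0

print(viterbi_decode(emissions, transitions))  # prints ['B-DIS', 'I-DIS', 'O']
```

Note that a greedy per-token choice would pick "O" at the second token (emission 1.5 plus the higher O-transition paths at that step are competitive), while the transition bonuses let the CRF-style decoder prefer the coherent B-DIS, I-DIS, O sequence — the "global information" the abstract refers to.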