Recently, researchers have extensively explored methods for named entity recognition in electronic medical records, including character-based, word-based, and hybrid approaches. Nonetheless, these methods frequently disregard the semantic context of entities within electronic medical records, which leads to low-quality clinical knowledge bases and hinders the discovery of clinical knowledge. To address these challenges, we propose a purpose-driven SoftLexicon-RoBERTa-BiLSTM-CRF (SLRBC) model for named entity recognition in electronic medical records. SLRBC fuses SoftLexicon with RoBERTa to incorporate word lexicon information from electronic medical records into the character representations, enriching the model’s semantic embeddings. This purpose-driven approach yields a more comprehensive representation and avoids common segmentation errors, thereby improving the accuracy of entity recognition. Furthermore, we employ the classical BiLSTM-CRF framework to capture the contextual information of entities more effectively. To assess the performance of SLRBC, we conducted a series of experiments on the public CCKS2018 and CCKS2019 datasets. The results demonstrate that SLRBC efficiently extracts entities from Chinese electronic medical records, attaining F1 scores of 94.97% and 85.40% on CCKS2018 and CCKS2019, respectively, and exhibiting strong performance in the extraction and use of clinical information.
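
To make the described pipeline concrete, the following is a minimal sketch of how character representations from RoBERTa could be fused with precomputed SoftLexicon word-set features and passed through a BiLSTM-CRF tagger. It assumes PyTorch together with the HuggingFace Transformers and pytorch-crf packages; the layer sizes, the `lexicon_feats` tensor (built by the SoftLexicon procedure outside this class), and the pretrained model name are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of the SLRBC pipeline; dimensions, the pretrained
# checkpoint, and the SoftLexicon feature construction are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel   # assumed dependency: HuggingFace Transformers
from torchcrf import CRF             # assumed dependency: pytorch-crf


class SLRBC(nn.Module):
    """RoBERTa character encoder + SoftLexicon features -> BiLSTM -> CRF."""

    def __init__(self, num_tags: int, lexicon_dim: int = 200,
                 roberta_name: str = "hfl/chinese-roberta-wwm-ext",
                 lstm_hidden: int = 256):
        super().__init__()
        self.roberta = AutoModel.from_pretrained(roberta_name)
        char_dim = self.roberta.config.hidden_size
        # The BiLSTM consumes the concatenation of character and lexicon vectors.
        self.bilstm = nn.LSTM(char_dim + lexicon_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.emission = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, lexicon_feats, tags=None):
        # Character-level contextual representations from RoBERTa.
        char_repr = self.roberta(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
        # Fuse SoftLexicon word-set features with the character embeddings.
        fused = torch.cat([char_repr, lexicon_feats], dim=-1)
        lstm_out, _ = self.bilstm(fused)
        emissions = self.emission(lstm_out)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(emissions, mask=mask)
```

In this sketch the lexicon fusion happens by simple concatenation before the BiLSTM, which is one common way to inject SoftLexicon word information at the character level; the BiLSTM then models bidirectional context and the CRF layer enforces valid label transitions during decoding.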