Medical named entity recognition (NER) focuses on extracting and classifying key entities from medical texts. By automating medical information extraction, NER can substantially improve the efficiency of electronic medical record analysis, medical literature retrieval, and intelligent medical question-answering systems, enabling doctors and researchers to obtain the medical information they need more quickly and thereby helping to improve the accuracy of diagnosis and treatment decisions. Existing methods have limitations in handling contextual dependencies and entity memory, and fail to fully exploit the contextual relevance of, and interactions between, entities. To address these issues, this paper proposes a Chinese medical NER model that combines contextual dependency perception with a new memory unit. The model integrates the pre-trained BERT model with a new memory unit (GLMU) and a recall network (RMN): the GLMU efficiently captures long-distance dependencies, while the RMN enhances multi-level semantic information processing. The model also incorporates fully connected (FC) layers and conditional random fields (CRF) to further optimize entity classification and sequence labeling. Experimental results show that the model achieves F1 scores of 91.53% and 64.92% on the Chinese medical datasets MCSCSet and CMeEE, respectively, surpassing related models and demonstrating clear advantages in medical entity recognition.
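For concreteness, the sketch below shows one plausible way to wire together the pipeline the abstract describes (BERT encoder, memory unit, FC layer, CRF decoder) in PyTorch. The abstract does not specify the internals of the GLMU or RMN, so the gated recurrence here is a hypothetical placeholder standing in for both; the class names, the `bert-base-chinese` checkpoint, and all hyperparameters are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a BERT -> memory unit -> FC -> CRF tagger.
# The GatedMemoryUnit below is a placeholder for the paper's GLMU/RMN.
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf


class GatedMemoryUnit(nn.Module):
    """Hypothetical stand-in for the GLMU: a gated recurrence over
    BERT hidden states intended to carry long-distance information."""

    def __init__(self, hidden_size):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)
        self.cand = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, x):                       # x: (batch, seq, hidden)
        batch, seq, hidden = x.shape
        mem = x.new_zeros(batch, hidden)        # running memory state
        outs = []
        for t in range(seq):
            joint = torch.cat([x[:, t], mem], dim=-1)
            g = torch.sigmoid(self.gate(joint))               # update gate
            mem = g * torch.tanh(self.cand(joint)) + (1 - g) * mem
            outs.append(mem)
        return torch.stack(outs, dim=1)         # (batch, seq, hidden)


class BertMemoryCRF(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.memory = GatedMemoryUnit(hidden)   # placeholder for GLMU + RMN
        self.fc = nn.Linear(hidden, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.fc(self.memory(h))
        mask = attention_mask.bool()
        if tags is not None:                    # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)  # inference: best tag paths
```

The CRF sits after the FC layer because it scores whole tag sequences rather than independent per-token labels, which is the standard reason it appears as the final layer in NER models of this kind.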