Text-based person re-identification retrieves specific pedestrians from a large image gallery using textual descriptions, addressing scenarios in which query images of the target pedestrian are unavailable. The main challenges of this task are learning discriminative image–text features and achieving accurate cross-modal matching. Although pedestrian attributes carry rich semantic information, existing methods have not yet fully exploited this resource. To this end, we introduce a novel text-based dual-branch person re-identification algorithm built on a Deep Attribute Information Mining (DAIM) network. Our approach employs a Masked Language Modeling (MLM) module to learn cross-modal attribute alignment and an Implicit Relational Prompt (IRP) module to extract relational cues between pedestrian attributes using tailored prompt templates. Furthermore, drawing inspiration from feature fusion techniques, we design a Symmetry Semantic Feature Fusion (SSF) module that exploits symmetric relationships between attributes to integrate information from the two modalities, capturing comprehensive features and enabling efficient cross-modal interaction. We evaluate our method on three benchmark datasets, CUHK-PEDES, ICFG-PEDES, and RSTPReid, achieving Rank-1 accuracies of 78.17%, 69.47%, and 68.30%, respectively. These results represent a significant improvement in pedestrian retrieval accuracy and validate the effectiveness of the proposed approach.
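To make the role of the MLM branch concrete, the snippet below is a minimal PyTorch sketch of a generic cross-modal masked-language-modeling head, in which embeddings of a partially masked caption attend to image patch features before predicting the masked vocabulary tokens. All names, dimensions (the 512-d embeddings and CLIP-style vocabulary size), and the single cross-attention layer are illustrative assumptions for exposition, not the actual DAIM implementation.

```python
import torch
import torch.nn as nn

class CrossModalMLMHead(nn.Module):
    """Illustrative cross-modal MLM head: masked caption tokens attend to
    image patch features, then a linear classifier predicts the masked words."""
    def __init__(self, dim=512, vocab_size=49408, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, vocab_size)

    def forward(self, text_tokens, image_tokens):
        # text_tokens:  (B, Lt, D) embeddings of the partially masked caption
        # image_tokens: (B, Li, D) patch features of the pedestrian image
        fused, _ = self.cross_attn(text_tokens, image_tokens, image_tokens)
        fused = self.norm(fused + text_tokens)          # residual + norm
        return self.classifier(fused)                   # (B, Lt, vocab_size) logits

# Usage sketch: cross-entropy is computed only on masked positions
# (label -100 marks positions that the loss ignores).
head = CrossModalMLMHead()
text = torch.randn(2, 77, 512)                          # dummy caption embeddings
image = torch.randn(2, 193, 512)                        # dummy image patch features
logits = head(text, image)
labels = torch.full((2, 77), -100)                      # ignore all unmasked tokens
labels[:, 5] = 1037                                     # pretend token 5 was masked
loss = nn.CrossEntropyLoss(ignore_index=-100)(
    logits.view(-1, logits.size(-1)), labels.view(-1))
```

The key design point this sketch illustrates is that the masked-token prediction is conditioned on the image, so solving the MLM objective forces textual attribute words and the corresponding visual regions into alignment.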