In real-world scenarios, pedestrian images often suffer from occlusion: certain body parts become invisible, making it challenging for existing methods to accurately identify pedestrians with the same ID. Traditional approaches typically match only the visible body parts, which can lead to misalignment when occlusion patterns vary. To alleviate this misalignment in occluded pedestrian images, the authors propose a novel framework called body topology information generation and matching. The framework consists of two main modules: the body topology information generation module and the body topology information matching module. The generation module employs an adaptive detection mechanism and a capsule generative adversarial network to restore a holistic pedestrian image while preserving body topology information. The matching module leverages the holistic image restored by the generation module to overcome spatial misalignment and utilises cosine distance as the similarity measure for matching. By combining the two modules, the authors keep the body topology information features of pedestrian images consistent from restoration through retrieval. Extensive experiments are conducted on both holistic person re-identification datasets (Market-1501, DukeMTMC-ReID) and occluded person re-identification datasets (Occluded-DukeMTMC, Occluded-ReID). The results demonstrate the superior performance of the authors' proposed model, and visualisations of the generation and matching modules illustrate their effectiveness. Furthermore, an ablation study validates the contributions of the proposed framework.
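The retrieval step described above, matching a query against a gallery by cosine distance over feature embeddings, can be sketched as follows. This is a minimal illustration under assumed conventions (NumPy arrays of extracted features, L2-normalisation before comparison); the function names and shapes are hypothetical, not the authors' actual implementation.

```python
import numpy as np

def cosine_distance(query_feat, gallery_feats):
    """Cosine distance between one query feature vector and a gallery matrix.

    query_feat:    shape (d,)   -- hypothetical embedding of the query image
    gallery_feats: shape (n, d) -- hypothetical embeddings of n gallery images
    """
    # L2-normalise so the dot product equals cosine similarity.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    # Cosine distance = 1 - cosine similarity; smaller means more similar.
    return 1.0 - g @ q

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices ordered from most to least similar."""
    return np.argsort(cosine_distance(query_feat, gallery_feats))
```

For example, with a query vector pointing along the first axis, a gallery vector nearly parallel to it ranks first and an opposite vector ranks last.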