Person re‐identification (re‐id), which aims to recognize the same person across images captured by non‐overlapping cameras, is a challenging topic in computer vision. It has been studied extensively in recent years, and attention mechanisms are widely applied to person re‐id. However, many works focus mainly on extracting discriminative features from locally salient regions while ignoring potentially useful global relations between whole‐body features and body‐part features. In this study, we first propose two effective forms of global information for extracting discriminative features: spatial topology information (STI) and channel affinity information (CAI). On this basis, we further propose a Multi‐information Fusion reinforced Global Attention (MIFGA) module that effectively fuses multiple sources of information and uses this more comprehensive information to guide the learning of attention, thereby obtaining pedestrian features that are conducive to clustering. Specifically, the proposed MIFGA module comprises spatial attention (MIFGA‐S) and channel attention (MIFGA‐C). MIFGA‐S mainly uses local semantic features and STI to guide the learning of spatial attention; furthermore, to mine the latent topology information in the original feature maps, we propose a self‐learning graph convolutional network. MIFGA‐C fuses channel semantic information and CAI to guide the learning of channel attention. Extensive ablation studies demonstrate that the proposed MIFGA significantly improves the baseline model and achieves competitive performance compared with state‐of‐the‐art person re‐id methods on the standard datasets Market‐1501, DukeMTMC‐reID, and CUHK03.
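The abstract states that MIFGA‐C fuses channel semantic information with channel affinity information to produce channel attention, but gives no formulation. The following is a minimal NumPy sketch of one plausible reading, not the paper's actual method: channel semantics are taken as per‐channel global average pooling, channel affinity as the cosine similarity between flattened channel maps, and the two are fused by a sigmoid gate. The function name `mifga_c_sketch` and every design choice here (pooling, cosine affinity, additive fusion) are illustrative assumptions.

```python
import numpy as np


def mifga_c_sketch(feat):
    """Hypothetical channel-attention sketch inspired by MIFGA-C.

    feat: array of shape (C, H, W). Returns a reweighted feature map
    of the same shape. This is an assumed formulation, not the paper's.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, -1)                      # (C, H*W)

    # Channel semantic information: global average pooling per channel.
    gap = flat.mean(axis=1)                         # (C,)

    # Channel affinity information: cosine similarity between channels.
    norm = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    affinity = norm @ norm.T                        # (C, C)

    # Fuse: summarize each channel's affinity to all others and
    # combine it additively with the semantic descriptor.
    aff_summary = affinity.mean(axis=1)             # (C,)
    weights = 1.0 / (1.0 + np.exp(-(gap + aff_summary)))  # sigmoid gate

    # Reweight each channel of the input feature map.
    return feat * weights[:, None, None]


# Tiny usage example with a random feature map.
feat = np.random.rand(8, 4, 4)
out = mifga_c_sketch(feat)
```

Because the sigmoid gate lies in (0, 1), the output is a per‐channel attenuation of the input; a learned version would replace the fixed additive fusion with trainable projections.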