Building dense correspondences between two images is a fundamental vision problem. Most existing methods rely on local features alone, yet local features are often insufficient to disambiguate visually similar regions; global context cannot be ignored. Computing reliable correspondences requires modeling both the structural relationships among local descriptors and the importance of each local feature. To this end, we propose a novel multi-scale attention and structural relation graph (MASRG) network for local feature matching. MASRG adopts a coarse-to-fine architecture that first establishes coarse matches on a coarse-level feature map and then refines them on a fine-level feature map. We propose a structural relation graph module and a multi-scale attention module, and we inject global context information into the overall architecture. The global information separately assists in learning the structural relations between local descriptors, the features of different receptive fields, and the importance of individual local features, so that a limited set of candidate matches can be obtained with high confidence, from which the final matching relationship is predicted. In this way, the network significantly improves matching reliability and localization accuracy. Our method achieves performance gains of 5.6%, 6.7%, and 6.3% over the baseline method (see Table I) under different conditions on HPatches. Extensive experiments on three large-scale datasets (i.e., HPatches, InLoc, and Aachen Day-Night v1.1) demonstrate that the proposed MASRG is superior to state-of-the-art local feature matching approaches.
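For intuition, the sketch below illustrates the generic coarse-to-fine matching pipeline the abstract describes: dual-softmax scoring with a mutual-nearest-neighbor check on coarse descriptors, followed by soft-argmax refinement in a fine-level window. This is a minimal NumPy sketch under our own assumptions, not the paper's implementation; all function names (`dual_softmax`, `coarse_match`, `refine_match`) and parameters are hypothetical, and the structural relation graph and multi-scale attention modules are omitted.

```python
import numpy as np

def dual_softmax(sim, temperature=0.1):
    """Dual-softmax scores: row-wise softmax times column-wise softmax."""
    def softmax(x, axis):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)
    s = sim / temperature
    return softmax(s, axis=1) * softmax(s, axis=0)

def coarse_match(desc_a, desc_b, conf_thresh=0.2):
    """Coarse matches from L2-normalized coarse descriptors.

    desc_a: (Na, D), desc_b: (Nb, D).
    Returns a list of (i, j, confidence) mutual-nearest-neighbor matches.
    """
    sim = desc_a @ desc_b.T                    # (Na, Nb) similarity matrix
    conf = dual_softmax(sim)
    rows = conf.argmax(axis=1)                 # best j for each i
    cols = conf.argmax(axis=0)                 # best i for each j
    matches = []
    for i, j in enumerate(rows):
        if cols[j] == i and conf[i, j] > conf_thresh:  # mutual NN + gate
            matches.append((i, j, conf[i, j]))
    return matches

def refine_match(fine_a, fine_b, eps=1e-8):
    """Refine one coarse match via soft-argmax over a fine-level window.

    fine_a: (D,) center descriptor from image A's fine feature map.
    fine_b: (w, w, D) fine-level window from image B around the coarse match.
    Returns the expected sub-window offset (dy, dx).
    """
    w = fine_b.shape[0]
    sim = np.einsum('d,ijd->ij', fine_a, fine_b)   # (w, w) correlation map
    p = np.exp(sim - sim.max())
    p /= p.sum() + eps
    ys, xs = np.mgrid[0:w, 0:w]
    center = (w - 1) / 2.0
    return (p * ys).sum() - center, (p * xs).sum() - center
```

In this reading, the coarse stage prunes the quadratic match space to a small set of high-confidence candidates, and the fine stage recovers localization accuracy; MASRG's contribution is to condition both stages on global context through its structural relation graph and multi-scale attention modules.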