Remote sensing platforms typically operate at significant distances from the ground, ranging from several hundred meters to hundreds of kilometers. Consequently, compared with natural images, remote sensing images (RSI) cover larger areas and contain more complex information. The larger size and data volume of RSI present challenges for computer vision matching algorithms (MAs), making them difficult to apply directly to RSI matching. Moreover, a matching framework for multi-source RSI that takes entire images as input and integrates multiple MAs for large-scale processing is presently lacking. This study proposes a Tie Points (TPs) Matching Framework for Multi-Source Remote Sensing Images (MSRSI-TPMF) based on the geometric and radiometric characteristics of RSI. First, each RSI is divided into grids and undergoes local geometric correction. Next, matching between slice images is performed by the MAs. Finally, TPs are generated by mapping the matched points in the slice images back to the whole RSI using a geometric processing model. Six representative MAs, including handcrafted feature matching algorithms and deep learning algorithms, are integrated into the framework to match TPs from different RSI. Results demonstrate that TPs are successfully extracted for multi-source RSI, validating the framework's efficacy. In addition, a large-scale TP matching test of a deep learning matching algorithm is performed using 13 synthetic aperture radar (SAR) images (10 m resolution), yielding a TP root mean square error (RMSE) of 0.368 pixels and further confirming the framework's reliability. Our framework is available at https://github.com/TobyChengV1/MSRSI-TPMF.
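
To make the tile-and-map workflow summarized above concrete, the following minimal sketch illustrates the general idea in Python. It is an illustration under simplifying assumptions, not the framework's implementation: the function names (`tile_offsets`, `match_tie_points`) and the `matcher` callable are hypothetical, the two images are assumed to be roughly co-registered so corresponding tiles overlap, and the framework's local geometric correction via the geometric processing model is not reproduced here.

```python
import numpy as np

def tile_offsets(height, width, tile_size):
    """Enumerate the top-left (row, col) offsets of a regular tile grid."""
    return [(r, c)
            for r in range(0, height, tile_size)
            for c in range(0, width, tile_size)]

def match_tie_points(ref_img, sec_img, matcher, tile_size=1024):
    """Match tie points tile by tile and map them to full-image coordinates.

    `matcher` is any callable that takes two tiles and returns two (N, 2)
    arrays of (x, y) points, e.g. a handcrafted or deep learning matcher.
    """
    tie_points = []
    h = min(ref_img.shape[0], sec_img.shape[0])
    w = min(ref_img.shape[1], sec_img.shape[1])
    for row, col in tile_offsets(h, w, tile_size):
        ref_tile = ref_img[row:row + tile_size, col:col + tile_size]
        sec_tile = sec_img[row:row + tile_size, col:col + tile_size]
        pts_ref, pts_sec = matcher(ref_tile, sec_tile)
        if len(pts_ref) == 0:
            continue
        # Shift tile-local (x, y) coordinates back into the whole image.
        offset = np.array([col, row], dtype=float)
        tie_points.append(np.hstack([pts_ref + offset, pts_sec + offset]))
    return np.vstack(tie_points) if tie_points else np.empty((0, 4))
```

In the actual framework, the final mapping step would use the geometric processing model of each image rather than a simple pixel offset, and any of the six integrated MAs could be plugged in as the `matcher` callable.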