In splice forgery, objects added to an original image alter its semantics, and the distribution of such spliced images may have negative impacts. To address this issue, many forgery detection methods based on convolutional neural networks have been proposed. However, they tend to extract deep features while ignoring the importance of shallow semantics. In addition, the complexity of the forgery localization task leads to insufficient accuracy in detecting smaller forged regions. Motivated by these problems, this work proposes an end-to-end image splicing localization network based on multi-scale features and a residual refinement module (RRM). The approach can be roughly divided into two modules: the detection module and the RRM. First, shallow and deep features are extracted by a backbone network, and the deep features are then processed by a deeper atrous spatial pyramid pooling (ASPP) module to extract multi-scale features. The deeper ASPP module uses smaller dilation rates and lightweight convolutions, making it better suited to detecting complex forged images. Second, the shallow features are fused with the multi-scale features to supply shallow semantic information, such as texture and edges, which further improves the robustness of the model. Finally, the detection network generates coarse prediction maps that are fed to the RRM, which refines these masks by smoothing boundaries, filling small gaps, and enhancing edge details, thereby improving the pixel-level segmentation results for forgery localization. Extensive experiments on several public datasets show that the proposed method outperforms other state-of-the-art methods in image forgery localization.
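The core mechanism behind the ASPP module, atrous (dilated) convolution with parallel branches at different rates, can be illustrated with a minimal NumPy sketch. The kernel sizes, dilation rates, and averaging-based branch fusion below are illustrative assumptions for a single-channel feature map, not the paper's exact configuration:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2D convolution with a dilated (atrous) kernel.

    x: (H, W) feature map; kernel: (k, k) weights; dilation: spacing
    between kernel taps. The effective receptive field grows to
    k + (k - 1) * (dilation - 1) without adding any parameters, which
    is how ASPP captures context at multiple scales.
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (dilation - 1)   # effective kernel size
    pad = eff // 2
    xp = np.pad(x, pad, mode="constant")
    out = np.zeros_like(x, dtype=float)
    for i in range(k):                   # accumulate one shifted slice per tap
        for j in range(k):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def aspp(x, kernels, rates):
    """Run parallel dilated branches and fuse them by averaging
    (a stand-in for the usual learned 1x1 fusion convolution)."""
    branches = [dilated_conv2d(x, k, r) for k, r in zip(kernels, rates)]
    return np.mean(branches, axis=0)
```

Smaller dilation rates, as used in the deeper ASPP here, keep the sampled taps close together, which helps resolve small forged regions that very large rates would skip over.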