In synthetic aperture radar (SAR) image processing, research predominantly focuses on single-task learning and often neglects the concurrent impact of speckle noise and low resolution on SAR images. Two main processing strategies are currently used. The first performs speckle reduction and super-resolution step by step; the second treats speckle reduction as an auxiliary step that supports super-resolution as the primary task. Both strategies, however, exhibit clear deficiencies. At the same time, the two tasks share two key objectives, enhancing SAR image quality and restoring details, so fusing them can exploit their correlation and substantially improve processing effectiveness. In addition, multi-temporal SAR images, which cover imaging information from different time periods, are highly correlated and provide deep learning models with a more diverse feature representation space, greatly enhancing a model's ability to address complex problems. Therefore, this study proposes a deep learning network for integrated speckle reduction and super-resolution in multi-temporal SAR (ISSMSAR). The network aims to reduce speckle in multi-temporal SAR images while significantly improving their resolution. Specifically, it consists of two subnetworks, which take the SAR image at time 1 and the SAR image at time 2 as inputs, respectively. Each subnetwork includes a primary feature extraction block (PFE), a high-level feature extraction block (HFE), a multi-temporal feature fusion block (FFB), and an image reconstruction block (REC). Experiments on diverse data sources demonstrate that ISSMSAR surpasses single-task speckle reduction and super-resolution methods in terms of both subjective perception and objective evaluation metrics of image restoration quality.
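The abstract names the four blocks of each subnetwork (PFE, HFE, FFB, REC) but does not specify their internals. The following PyTorch sketch is only an illustrative skeleton of that two-branch, feature-exchanging layout, assuming hypothetical `ConvBlock` placeholders, a channel width of 64, and a 2x upscaling factor; it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Placeholder conv + ReLU block standing in for each unspecified extractor."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class Subnetwork(nn.Module):
    """One temporal branch: PFE -> HFE -> multi-temporal fusion (FFB) -> reconstruction (REC)."""

    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.pfe = ConvBlock(1, channels)              # primary feature extraction (PFE)
        self.hfe = ConvBlock(channels, channels)       # high-level feature extraction (HFE)
        self.ffb = ConvBlock(2 * channels, channels)   # multi-temporal feature fusion (FFB)
        self.rec = nn.Sequential(                      # image reconstruction (REC) with upsampling
            nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def extract(self, x):
        return self.hfe(self.pfe(x))

    def reconstruct(self, own_feat, other_feat):
        fused = self.ffb(torch.cat([own_feat, other_feat], dim=1))
        return self.rec(fused)


class ISSMSAR(nn.Module):
    """Two subnetworks exchange features so each output benefits from both acquisitions."""

    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.branch_t1 = Subnetwork(channels, scale)
        self.branch_t2 = Subnetwork(channels, scale)

    def forward(self, sar_t1, sar_t2):
        f1 = self.branch_t1.extract(sar_t1)
        f2 = self.branch_t2.extract(sar_t2)
        out1 = self.branch_t1.reconstruct(f1, f2)
        out2 = self.branch_t2.reconstruct(f2, f1)
        return out1, out2


# Usage: two co-registered, speckled, low-resolution SAR patches (batch of 1, single channel).
model = ISSMSAR(channels=64, scale=2)
t1 = torch.rand(1, 1, 64, 64)
t2 = torch.rand(1, 1, 64, 64)
hr_t1, hr_t2 = model(t1, t2)  # despeckled, 2x super-resolved outputs of shape (1, 1, 128, 128)
```

The cross-branch concatenation before the FFB is one plausible way to realize the multi-temporal fusion described in the abstract; the paper itself may fuse features differently or at multiple depths.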