By leveraging the characteristics of different optical sensors, infrared and visible image fusion generates a fused image that combines prominent thermal radiation targets with clear texture details. Existing methods often focus on a single modality or treat the two modalities equally, overlooking the distinctive characteristics of each modality and failing to fully exploit their complementary information. To address this problem, we propose an end-to-end infrared and visible image fusion model based on shared-individual multi-scale feature decomposition. First, to extract multi-scale features from the source images, a symmetric multi-scale decomposition encoder consisting of nest connections and a multi-scale receptive field network is designed to capture small-, medium-, and large-scale features. Second, to fully utilize complementary information, common edge feature maps are introduced into the feature decomposition loss function to decompose the extracted features into shared and individual features. Third, to aggregate the shared and individual features, a shared-individual self-augmented decoder is proposed that takes the individual fusion feature maps as the main input and the shared fusion feature maps as a residual input to assist the decoding process and reconstruct the fused image. Finally, comparisons of subjective evaluations and objective metrics demonstrate the superiority of our method over state-of-the-art approaches.
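To make the decoder's role concrete, the following is a minimal PyTorch sketch of the shared-individual residual pattern described above, in which individual fusion features drive the main decoding path and shared fusion features are injected as a residual; all module names, channel widths, and the element-wise averaging fusion rule are illustrative assumptions rather than the paper's actual architecture.

```python
# A minimal sketch, assuming single-channel output images and a simple
# element-wise averaging fusion rule; not the paper's exact implementation.
import torch
import torch.nn as nn


class SharedIndividualDecoder(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Main branch decodes the individual (modality-specific) features.
        self.main = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Residual branch projects the shared features before injection.
        self.residual = nn.Conv2d(channels, channels, kernel_size=1)
        # Head maps the decoded features back to a single-channel image.
        self.head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, individual: torch.Tensor, shared: torch.Tensor) -> torch.Tensor:
        # Individual fusion features are the main input; shared fusion
        # features enter as a residual to assist the decoding process.
        x = self.main(individual) + self.residual(shared)
        return torch.sigmoid(self.head(x))


# Usage: fuse the per-modality feature maps (here by averaging, an assumed
# stand-in for the model's fusion rule) and decode the fused image.
decoder = SharedIndividualDecoder(channels=64)
ind_ir, ind_vis = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
shr_ir, shr_vis = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
fused = decoder((ind_ir + ind_vis) / 2, (shr_ir + shr_vis) / 2)
print(fused.shape)  # torch.Size([1, 1, 128, 128])
```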