The fusion quality of visible and infrared images is critical for subsequent human interpretation of image information and for target processing. Existing infrared and visible image fusion methods still leave room for improvement in image contrast, sharpness, and richness of detail. To obtain better fusion performance, this paper proposes an infrared and visible image fusion algorithm based on latent low-rank representation (LatLRR) nested with rolling guided image filtering (RGIF), a novel solution that integrates two-level decomposition and three-layer fusion. First, each source image is decomposed by LatLRR into a low-rank sublayer, a saliency sublayer, and a sparse-noise sublayer. Then, RGIF is applied to the low-rank sublayer for further multiscale decomposition, extracting multiple detail layers that are fused with convolutional neural network (CNN)-based fusion rules to obtain a detail-enhanced layer. Next, the low-rank sublayers are fused by an algorithm based on improved visual saliency mapping with weighted guided image filtering (IVSM-GIF), and the saliency sublayers are fused by adaptively weighting regional energy features over a Laplacian pyramid decomposition. Finally, the fused low-rank sublayer, saliency sublayer, and detail-enhanced layer are combined to reconstruct the final image. Experimental results show that the proposed method outperforms other state-of-the-art fusion methods in both visual quality and objective evaluation, achieving the highest average values on six objective evaluation metrics.
INDEX TERMS Image fusion, rolling guided image filtering, latent low-rank representation, detail-enhanced layer.
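To make the second-level decomposition step concrete, the following is a minimal sketch of extracting multiple detail layers from a low-rank sublayer with a rolling-guidance-style filter. It assumes a guided filter (He et al.) as the joint filter inside the rolling iterations and illustrative radii and smoothing parameters; it does not reproduce the authors' exact RGIF settings or the CNN-based fusion rules.

```python
# Hedged sketch: multiscale detail extraction with a rolling-guidance-style filter.
# Radii, eps, and iteration counts below are illustrative assumptions, not the
# paper's parameters.
import numpy as np
import cv2


def guided_filter(guide, src, radius, eps):
    """Edge-preserving guided filter; `guide` steers the smoothing of `src`."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    box = lambda x: cv2.boxFilter(x, -1, ksize)
    mean_g, mean_s = box(guide), box(src)
    cov_gs = box(guide * src) - mean_g * mean_s
    var_g = box(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)


def rolling_guided_filter(src, radius=4, eps=0.01, iters=4):
    """Remove small structures: start from a Gaussian blur, then repeatedly
    filter the original image using the previous result as guidance."""
    guide = cv2.GaussianBlur(src, (0, 0), sigmaX=radius)
    for _ in range(iters):
        guide = guided_filter(guide, src, radius, eps)
    return guide


def multiscale_decompose(low_rank_sublayer, radii=(2, 4, 8)):
    """Split a low-rank sublayer into one coarse base layer and several
    detail layers (base_{k-1} - base_k), one per filtering scale."""
    base = low_rank_sublayer.astype(np.float64)
    details = []
    for r in radii:
        smoother = rolling_guided_filter(base, radius=r)
        details.append(base - smoother)
        base = smoother
    return base, details


if __name__ == "__main__":
    # Synthetic stand-in for a low-rank sublayer: smooth ramp plus fine texture.
    y, x = np.mgrid[0:256, 0:256]
    img = 0.5 * (x / 255.0) + 0.1 * np.sin(x / 3.0) * np.cos(y / 3.0)
    base, details = multiscale_decompose(img)
    recon = base + sum(details)  # the decomposition is exactly invertible
    print("max reconstruction error:", np.abs(recon - img).max())
```

Because each detail layer is defined as the difference between successive base layers, summing the coarsest base with all detail layers reconstructs the input exactly; fusion rules then operate per layer before recombination.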