Image fusion is an important problem in image processing: it produces a single resultant image by combining complementary features from two or more source images. We propose an infrared (IR) and visible (VIS) image fusion technique that combines latent low-rank representation (LatLRR) with ResNet to mitigate the loss of contrast, texture, and spectral and spatial detail in the fused image, making the result better suited to both human and machine vision. Using LatLRR, each source image is first decomposed into a latent low-rank part and a saliency part. The low-rank components are then fused with a weighted-average strategy. For the saliency part, a deep-feature fusion approach counters information loss: ResNet extracts multilayer features, and its skip connections allow information from every layer to be gathered. The deep features are normalized by the nuclear norm to generate preliminary weight maps, and a softmax operation over these preliminary maps yields the final weight maps. Finally, the fused image is reconstructed by summing the fused low-rank component and the fused saliency content. The proposed method is evaluated both visually and quantitatively against five existing methods, and the results show that it reconstructs the fused image without halos or artifacts.
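The pipeline described above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: the LatLRR solver is replaced by a truncated-SVD stand-in, and the nuclear-norm activity maps are computed directly on the saliency parts rather than on multilayer ResNet features (which would require a pretrained network). All function names here are hypothetical.

```python
import numpy as np

def latlrr_placeholder(img, rank=4):
    """Stand-in decomposition (assumption: truncated SVD instead of the
    paper's LatLRR solver): returns a low-rank base and a saliency residual."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return low_rank, img - low_rank

def patch_nuclear_norm(feat, k=8):
    """Per-patch activity map: the nuclear norm (sum of singular values)
    of each k x k block of a feature map."""
    h, w = feat.shape
    act = np.zeros((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            block = feat[i * k:(i + 1) * k, j * k:(j + 1) * k]
            act[i, j] = np.linalg.svd(block, compute_uv=False).sum()
    return act

def fuse(ir, vis, k=8):
    """Fuse two equally sized grayscale images (sides divisible by k)."""
    lr_ir, sal_ir = latlrr_placeholder(ir)
    lr_vis, sal_vis = latlrr_placeholder(vis)
    # 1) Weighted-average fusion of the latent low-rank components.
    fused_lr = 0.5 * lr_ir + 0.5 * lr_vis
    # 2) Saliency fusion: softmax over nuclear-norm activity maps
    #    (computed here on the saliency parts, standing in for deep features).
    a_ir = patch_nuclear_norm(sal_ir, k)
    a_vis = patch_nuclear_norm(sal_vis, k)
    m = np.maximum(a_ir, a_vis)                 # subtract max for stability
    e_ir, e_vis = np.exp(a_ir - m), np.exp(a_vis - m)
    w_ir = np.kron(e_ir / (e_ir + e_vis),       # upsample patch weights
                   np.ones((k, k)))             # back to pixel resolution
    fused_sal = w_ir * sal_ir + (1.0 - w_ir) * sal_vis
    # 3) Reconstruction: sum of the fused low-rank and saliency parts.
    return fused_lr + fused_sal

rng = np.random.default_rng(0)
ir = rng.random((64, 64))
vis = rng.random((64, 64))
fused = fuse(ir, vis)
```

Note that the softmax weights for the two saliency parts sum to one at every pixel, so fusing an image with itself returns the image unchanged, a useful sanity check for this kind of decomposition-and-sum design.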