When traditional super-resolution reconstruction methods are applied to infrared thermal images, they often ignore the poor image quality caused by the imaging mechanism, which makes it difficult to obtain high-quality reconstruction results even when the network is trained to invert a simulated degradation process. To address this issue, we propose a thermal infrared image super-resolution reconstruction method based on multimodal sensor fusion, which enhances the resolution of thermal infrared images and exploits multimodal sensor information to reconstruct high-frequency details, thereby overcoming the limitations of the imaging mechanism. First, we design a novel super-resolution reconstruction network consisting of primary feature encoding, super-resolution reconstruction, and high-frequency detail fusion subnetworks. Within this network, hierarchical dilated distillation modules and a cross-attention transformation module extract and transmit image features, enhancing the network's ability to express complex patterns. Then, we propose a hybrid loss function that guides the network to extract salient features from thermal infrared images and reference images while preserving accurate thermal information. Finally, we propose a learning strategy that maintains high-quality super-resolution reconstruction even when reference images are unavailable. Extensive experimental results show that the proposed method achieves superior reconstruction quality compared with competing methods, demonstrating its effectiveness.
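The hybrid loss described above can be sketched as a pixel-wise reconstruction term plus a weighted high-frequency detail term guided by the reference modality. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the use of an L1 distance, and the weighting factor `lam` are all assumptions.

```python
def l1_loss(pred, target):
    """Mean absolute error over flat lists of pixel values."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def hybrid_loss(sr, hr, sr_detail, ref_detail, lam=0.1):
    """Hypothetical hybrid loss: reconstruction error on the super-resolved
    output plus a detail term encouraging high-frequency features to match
    those extracted from the reference image (weighting `lam` is assumed)."""
    return l1_loss(sr, hr) + lam * l1_loss(sr_detail, ref_detail)
```

With a perfect reconstruction and matching detail maps, `hybrid_loss` returns 0; the `lam` knob trades off thermal fidelity against detail transfer from the reference sensor.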
Deep convolutional neural networks have achieved great success in the single image super-resolution task; among the well-known super-resolution methods, deep learning-based algorithms show the most advanced performance. However, the most advanced algorithms rely on complex networks with large numbers of parameters, which makes them difficult to deploy on mobile devices. To solve this problem, we propose a lightweight dual-residual network (LDRN) for single image super-resolution, which achieves better reconstruction quality than most current advanced lightweight algorithms. Owing to its small parameter count and low computational cost, our network can be readily deployed in real-time and mobile applications. Building on the residual module, we propose a new residual unit that uses two depthwise separable (DW) convolutions to obtain a better balance between feature extraction capacity and lightweight performance. We further design a dual-stream residual block containing a multiplication branch and an addition branch; this block improves reconstruction performance more effectively than widening the network. In addition, we design a new up-sampling module that simplifies previous up-sampling methods. Extensive experimental results show that our network outperforms most existing state-of-the-art algorithms in both reconstruction quality and lightweight performance.
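The dual-stream residual block can be sketched as an elementwise combination of a multiplicative (gating) branch, an additive branch, and the identity shortcut. This is a toy sketch on flat feature vectors; the branch contents and the exact fusion order in LDRN are assumptions made for illustration.

```python
def dual_stream_block(x, mul_branch, add_branch):
    """Hypothetical dual-stream residual block: the multiplication branch
    gates the input, the addition branch contributes a residual correction,
    and the identity shortcut is added at the end (fusion order assumed)."""
    gated = [xi * gi for xi, gi in zip(x, mul_branch(x))]   # multiplication branch
    fused = [gi + ai for gi, ai in zip(gated, add_branch(x))]  # addition branch
    return [yi + xi for yi, xi in zip(fused, x)]            # residual shortcut
```

The multiplicative branch lets the block rescale features per element (a form of gating), which can enrich the representation without adding the channel width, and parameters, that simply widening the network would require.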