Single-image super-resolution has been widely studied across applications to improve the quality and resolution of degraded images acquired from noise-sensitive, low-resolution sensors. Because most prior work has focused on deep networks running on high-performance GPUs, this study proposes an efficient, lightweight super-resolution network that achieves real-time performance on mobile devices. To replace the element-wise addition layer, which is relatively slow on mobile devices, we introduce a skip connection that directly concatenates the low-resolution input image with an intermediate feature map. We also introduce weighted clipping to reduce the quantization errors commonly incurred during float-to-int8 model conversion, and we selectively apply a reparameterization method at no additional cost in inference time or parameter count. With these contributions, the proposed network was recognized as the best solution in the Mobile AI & AIM 2022 Real-Time Single-Image Super-Resolution Challenge, with a PSNR of 30.03 dB and an NPU runtime of 19.20 ms.
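The weighted clipping idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the clip threshold and weight distribution below are illustrative assumptions. It shows why clipping outlier weights before computing a per-tensor int8 scale yields a finer quantization grid, and hence a lower error, for the bulk of the weights:

```python
import numpy as np

def fake_quant_int8(w, scale):
    """Symmetric int8 fake-quantization: round to the grid, saturate, dequantize."""
    return np.clip(np.round(w / scale), -127, 127) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 10_000)
w[0] = 2.0  # a single outlier weight inflates the max-abs quantization scale

# naive per-tensor scale from the raw max-abs value
scale_naive = np.abs(w).max() / 127.0

# clipping (illustrative threshold): saturate outliers to a tighter range
# before computing the scale, so the bulk of weights get a finer grid
clip = 0.2
w_clipped = np.clip(w, -clip, clip)
scale_clipped = np.abs(w_clipped).max() / 127.0

# compare quantization error over the inlier weights only: the trade-off
# is a large error on the clipped outlier for much finer bulk precision
inliers = np.abs(w) <= clip
mse_naive = np.mean((w[inliers] - fake_quant_int8(w[inliers], scale_naive)) ** 2)
mse_clipped = np.mean((w[inliers] - fake_quant_int8(w[inliers], scale_clipped)) ** 2)

print(scale_clipped < scale_naive)  # finer quantization step after clipping
print(mse_clipped < mse_naive)      # lower error on the bulk of the weights
```

The paper's weighted variant additionally weights how the clip range is chosen; the sketch above only demonstrates the underlying scale/error trade-off.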
Image super-resolution is a common task on mobile and IoT devices, where low-resolution images and video frames often need to be upscaled and enhanced. While numerous solutions have been proposed for this problem, they are usually incompatible with low-power mobile NPUs, which impose many computational and memory constraints. In this Mobile AI challenge, we address this problem by asking participants to design an efficient quantized image super-resolution solution that achieves real-time performance on mobile NPUs. Participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board, whose dedicated edge NPU accelerates quantized neural networks. All proposed solutions are fully compatible with this NPU, demonstrating up to 60 FPS when reconstructing Full HD images. A detailed description of all models developed in the challenge is provided in this paper.
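The challenge scores fidelity by PSNR against ground-truth high-resolution images. A minimal NumPy sketch of the metric, paired with a trivial nearest-neighbour 3x baseline (illustrative only; the challenge entries replace this baseline with learned INT8 networks, and the exact evaluation code is not reproduced here):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def upscale3x_nearest(img):
    """Trivial 3x upscaling baseline: repeat each pixel in both dimensions."""
    return np.repeat(np.repeat(img, 3, axis=0), 3, axis=1)

rng = np.random.default_rng(1)
hr = rng.integers(0, 256, (12, 12), dtype=np.uint8)  # toy ground truth
lr = hr[::3, ::3]            # crude 3x downsample by decimation
sr = upscale3x_nearest(lr)   # reconstruct at the original resolution

print(sr.shape)              # (12, 12): back to the HR resolution
print(psnr(hr, sr))          # baseline PSNR in dB
```

A learned solution is judged by how far it can push this PSNR while staying within the NPU runtime budget.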