Super-resolution (SR) is an ill-posed problem, and generating high-resolution (HR) images from low-resolution (LR) images remains a major challenge. Recently, SR methods based on deep convolutional neural networks (DCN) have achieved impressive performance improvements. DCN-based SR techniques can be broadly divided into peak signal-to-noise ratio (PSNR)-oriented SR networks and generative adversarial network (GAN)-based SR networks. In most current GAN-based SR networks, the perceptual loss is computed from the feature maps of a single layer or a few fixed layers of a differentiable feature extractor such as VGG. This limited layer utilization may produce overly textured artifacts. In this paper, a new edge texture metric (ETM) is proposed to quantify image characteristics; it is used only during the training phase to select an appropriate layer when calculating the perceptual loss. We present experimental results showing that a GAN-based SR network trained with the proposed method achieves qualitative and quantitative improvements in perceptual quality over many existing methods.
INDEX TERMS: Artificial neural networks, computer vision, image enhancement, image resolution
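To illustrate the mechanism described in the abstract, the following is a minimal PyTorch sketch of a perceptual loss whose VGG layer is selectable at training time. The class name, the L1 criterion, and the way the layer index is passed in are assumptions for illustration only; in the proposed method the layer would be chosen using the ETM, which is not reproduced here. The `weights="DEFAULT"` argument assumes a recent torchvision version.

```python
# Minimal sketch (not the authors' implementation): a perceptual loss computed
# from a selectable VGG-19 feature layer. In the proposed method the layer
# index would be chosen during training by the edge texture metric (ETM);
# here `layer_idx` is simply passed in as an assumed argument.
import torch
import torch.nn as nn
import torchvision


class SelectableVGGPerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen ImageNet-pretrained VGG-19 feature extractor.
        self.vgg = torchvision.models.vgg19(weights="DEFAULT").features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.criterion = nn.L1Loss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor, layer_idx: int) -> torch.Tensor:
        # Pass both images through VGG up to (and including) layer_idx
        # and compare the resulting feature maps.
        feat_sr, feat_hr = sr, hr
        for i, layer in enumerate(self.vgg):
            feat_sr = layer(feat_sr)
            feat_hr = layer(feat_hr)
            if i == layer_idx:
                break
        return self.criterion(feat_sr, feat_hr)


# Example usage: a deeper layer might be selected for texture-dominant patches
# and a shallower one for edge-dominant patches (this rule is illustrative).
loss_fn = SelectableVGGPerceptualLoss()
sr = torch.rand(1, 3, 96, 96)
hr = torch.rand(1, 3, 96, 96)
loss = loss_fn(sr, hr, layer_idx=20)
```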