The extraction and proper utilization of convolutional neural network (CNN) features have a significant impact on the performance of image super-resolution (SR). Although CNN features contain both spatial and channel information, current deep SR techniques often fail to maximize performance because they use either the spatial or the channel information alone. Moreover, they integrate such information within a deep or wide network rather than exploiting all the available features, which ultimately results in high computational complexity. To address these issues, we present a binarized feature fusion (BFF) structure that effectively utilizes the features extracted from residual groups (RG). Each residual group consists of multiple hybrid residual attention blocks (HRAB), each of which integrates a multiscale feature extraction module and a channel attention mechanism in a single block. Furthermore, we use dilated convolutions with different dilation factors to extract multiscale features. We also adopt global, short, and long skip connections together with the residual group structure to ease the flow of information without losing important feature details. We call the overall network architecture the hybrid residual attention network (HRAN). Experiments demonstrate the efficacy of our method against state-of-the-art methods in both quantitative and qualitative comparisons.
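The channel attention mechanism mentioned in the abstract can be sketched as follows. This is an illustrative squeeze-and-excitation-style gate in NumPy, with random weights standing in for trained parameters; it is not the authors' implementation, and the function name and shapes are assumptions for the example:

```python
import numpy as np

def channel_attention(features, reduction=4, rng=None):
    """Illustrative channel attention: rescale each channel of a (C, H, W)
    feature map by a learned per-channel weight. The two linear layers below
    use random weights as stand-ins for trained parameters."""
    rng = np.random.default_rng(0) if rng is None else rng
    C = features.shape[0]
    # Squeeze: global average pooling collapses the spatial dims to one value per channel.
    squeezed = features.mean(axis=(1, 2))               # shape (C,)
    # Excite: a bottleneck of two linear layers produces one gate value per channel.
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)             # ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # sigmoid gate in (0, 1)
    # Rescale each channel by its attention weight.
    return features * scale[:, None, None]

feats = np.random.default_rng(1).standard_normal((8, 16, 16))
out = channel_attention(feats)
print(out.shape)  # (8, 16, 16)
```

In an HRAB-style block, such a gate would typically follow the multiscale (dilated) convolution stage, letting the network emphasize informative channels before the residual addition.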
We propose a new hardware-friendly super-resolution (SR) algorithm that uses computationally simple feature extraction and regression methods, namely the local binary pattern (LBP) and linear mapping, respectively. The proposed method pre-trains dedicated linear mapping kernels for different texture types of low-resolution (LR) image patches, where the texture type is classified from LBP features. At inference, a high-resolution (HR) image patch is reconstructed by multiplying an LR image patch with the linear mapping kernel selected by the LBP feature class of the corresponding LR patch. Since the LBP is a highly efficient feature extraction operator for local texture classification, our method is extremely fast and power-efficient while showing reconstruction quality competitive with the latest machine-learning-based SR techniques. We also present a fully pipelined hardware architecture and its implementation for real-time operation of the proposed SR method. The proposed SR algorithm has been implemented on a field-programmable gate array (FPGA) platform, the Xilinx KCU105, which processes 63 frames per second (fps) while converting full-high-definition (FHD) images to 4K ultra-high-definition (UHD) images. Extensive experimental results show that the proposed algorithm and its hardware implementation achieve high reconstruction performance compared to the latest machine-learning-based SR methods while using minimal hardware resources, and thus have remarkably low computational complexity. The latest deep-learning-based SR approaches sometimes offer slightly higher reconstruction quality, but they require significantly more hardware resources than the proposed method.
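The inference path described above (classify a patch by its LBP code, then apply the matching linear mapping kernel) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the patch sizes, the 2x scale factor, and the random kernels standing in for pre-trained ones are all assumptions for the example:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern code (0..255) of a 3x3 patch's centre pixel."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # Each neighbour >= centre contributes one bit to the code.
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

def upscale_patch(lr_patch, kernels):
    """Reconstruct an HR patch: classify the LR patch by LBP, then multiply it
    by the linear mapping kernel for that class (here: 3x3 LR -> 6x6 HR)."""
    cls = lbp_code(lr_patch)
    k = kernels[cls]                             # shape (36, 9)
    return (k @ lr_patch.ravel()).reshape(6, 6)

rng = np.random.default_rng(0)
# 256 LBP classes; random kernels stand in for the pre-trained linear mappings.
kernels = rng.standard_normal((256, 36, 9)) * 0.1
lr = rng.integers(0, 256, (3, 3)).astype(float)
hr = upscale_patch(lr, kernels)
print(hr.shape)  # (6, 6)
```

The per-patch cost is one LBP lookup plus one small matrix-vector product, which is what makes the method amenable to a fully pipelined FPGA implementation.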