Stereo image super-resolution exploits additional features from cross-view image pairs for high-resolution (HR) image reconstruction. Recently, several methods have been proposed that investigate cross-view features along epipolar lines to enhance the visual perception of recovered HR images. Despite their impressive performance, these methods leave the global contextual features of cross-view images unexplored. In this paper, we propose a cross-view capture network (CVCnet) for stereo image super-resolution that uses both global contextual and local features extracted from the two views. Specifically, we design a cross-view block to capture diverse feature embeddings from the two views in stereo vision. In addition, a cascaded spatial perception module is proposed to reweight each location in the feature maps according to its importance, making feature extraction more effective. Extensive experiments demonstrate that our proposed CVCnet outperforms state-of-the-art image super-resolution methods on stereo image super-resolution tasks. The source code is available at https://github.com/xyzhu1/CVCnet.
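To make the idea of exchanging global context between the two views concrete, the following is a minimal sketch of a cross-view feature-exchange block in PyTorch. It is an illustration only, not the authors' CVCnet code: the class name, the single-head global attention, and the residual fusion into one view are assumptions.

```python
# Hypothetical cross-view block: one view attends to global context from the other.
# This is an illustrative sketch, not the CVCnet implementation.
import torch
import torch.nn as nn

class CrossViewBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, C, H, W) features from the two stereo views
        b, c, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.key(feat_b).flatten(2)                      # (B, C, HW)
        v = self.value(feat_b).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)     # global cross-view attention
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat_a + fused                                # residual fusion into view A

# Usage (assumed shapes): fused_left = CrossViewBlock(64)(left_feat, right_feat)
```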
Reference-based image super-resolution (RefSR) aims to reconstruct high-resolution (HR) images from low-resolution (LR) images by introducing HR reference images. The key step of RefSR is transferring reference features to the LR features. However, existing methods still lack an efficient transfer mechanism, which leads to blurry details in the generated images. In this paper, we propose a double-layer search module and an adaptive pooling fusion module group for reference-based image super-resolution, called DLASR. Based on a re-search strategy, the double-layer search module produces an accurate index map and score map; these two maps are used to select accurate reference features, which greatly increases the efficiency of feature transfer in the later stages. Through two consecutive feature-enhancement steps, the adaptive pooling fusion module group transfers more valuable reference features to the corresponding LR features. In addition, a structure reconstruction module is proposed to recover the geometric information of the images, further improving the visual quality of the generated results. We conduct comparative experiments on a variety of datasets, and the results show that DLASR achieves significant improvements over other state-of-the-art methods in terms of both quantitative accuracy and qualitative visual quality. The code is available at https://github.com/clttyou/DLASR.
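As a rough illustration of how an index map and score map can be obtained by matching LR features against reference features, the sketch below performs a single-level patch-wise cosine-similarity search. The function name, the patch size, and the single-level search are assumptions for illustration; the paper's double-layer re-search strategy is not reproduced here.

```python
# Hypothetical reference-feature search yielding an index map and a score map.
# Illustrative only; not the DLASR double-layer search module.
import torch
import torch.nn.functional as F

def search_reference(lr_feat, ref_feat, patch=3):
    # lr_feat, ref_feat: (B, C, H, W) feature maps of the LR input and HR reference
    lr_patches = F.unfold(lr_feat, kernel_size=patch, padding=patch // 2)    # (B, C*p*p, H*W)
    ref_patches = F.unfold(ref_feat, kernel_size=patch, padding=patch // 2)  # (B, C*p*p, H*W)
    lr_patches = F.normalize(lr_patches, dim=1)
    ref_patches = F.normalize(ref_patches, dim=1)
    # Cosine similarity between every LR patch and every reference patch
    corr = torch.bmm(lr_patches.transpose(1, 2), ref_patches)               # (B, H*W, H*W)
    score_map, index_map = corr.max(dim=-1)                                  # best match per LR position
    return index_map, score_map  # used to gather and weight reference features downstream
```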
Scene text image super-resolution (STISR) in the wild has been shown to benefit vision-based text recognition from low-resolution imagery. An intuitive way to improve STISR performance is to exploit the well-structured and repetitive layout characteristics of text as prior knowledge to guide model convergence. In this paper, we propose a novel gradient-based graph attention method that embeds patch-wise text layout context into image feature representations for high-resolution text image reconstruction in an implicit and elegant manner. We introduce a non-local group-wise attention module to extract text features, which are then enhanced by a cascaded channel attention module and a novel gradient-based graph attention module to obtain more effective representations by exploring correlations among regional and local patch-wise text layout properties. Extensive experiments on the TextZoom benchmark demonstrate that our method supports excellent text recognition and outperforms the current state of the art in STISR. The source code is available at https://github.com/xyzhu1/TSAN.
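To give a sense of how patch-wise layout context can be propagated with gradient guidance, the following is a minimal sketch of a patch-level graph attention module whose node features are emphasized by a precomputed gradient-magnitude map. The class name, the patch size, the multiplicative gradient prior, and the assumption that the spatial size is divisible by the patch size are all illustrative choices, not the authors' exact design.

```python
# Hypothetical patch-wise graph attention guided by an image-gradient prior.
# Illustrative sketch only; not the proposed gradient-based graph attention module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGraphAttention(nn.Module):
    def __init__(self, channels, patch=4):
        super().__init__()
        self.patch = patch
        self.proj_q = nn.Linear(channels * patch * patch, channels)
        self.proj_k = nn.Linear(channels * patch * patch, channels)
        self.proj_v = nn.Linear(channels * patch * patch, channels * patch * patch)

    def forward(self, feat, grad_map):
        # feat: (B, C, H, W) text features; grad_map: (B, 1, H, W) gradient magnitude
        b, c, h, w = feat.shape
        p = self.patch
        assert h % p == 0 and w % p == 0, "spatial size assumed divisible by patch size"
        # Emphasize high-gradient (stroke-boundary) regions before building graph nodes
        nodes = F.unfold(feat * (1 + grad_map), kernel_size=p, stride=p)   # (B, C*p*p, N)
        nodes = nodes.transpose(1, 2)                                      # (B, N, C*p*p)
        q, k = self.proj_q(nodes), self.proj_k(nodes)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        out = attn @ self.proj_v(nodes)                                    # message passing among patches
        out = F.fold(out.transpose(1, 2), output_size=(h, w), kernel_size=p, stride=p)
        return feat + out  # residual connection back to the feature map
```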