In image super-resolution, deep neural networks equipped with various attention mechanisms, such as channel attention and layer attention, have achieved remarkable performance in recent years. Although many researchers have obtained good super-resolution results with a single style of attention, the divergence and complementarity among multiple attention mechanisms are ignored. In addition, most of these methods fail to exploit the diverse information contained in multi-scale features. To exploit this rich information efficiently, this paper combines a multi-scale structure with multiple attention schemes at both the architecture and module levels for super-resolution. Specifically, at the architecture level, a fused pyramid attention network is developed that recurrently extracts deep features carrying multi-scale context from receptive fields of different sizes, with skip connections. At the module level, a fused pyramid attention module is designed that fuses the two attention mechanisms to further refine the deep features with fine-grained information. Compared with common fusion strategies, the adopted feature fusion structure preserves structural information better while establishing long-range dependencies. Extensive experimental results demonstrate that the proposed network achieves favorable performance both quantitatively and visually.
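The abstract does not specify the exact design of the fused pyramid attention module, so the following is only a minimal PyTorch sketch of the general idea: squeeze-and-excitation-style channel attention applied inside each branch of a feature pyramid, with a learned softmax weighting across branches standing in for layer attention, and a residual fusion at the end. The names `ChannelAttention` and `FusedPyramidAttention`, the pyramid scales, and the branch-weighting scheme are all assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (a standard design)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global spatial pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                            # rescale channels


class FusedPyramidAttention(nn.Module):
    """Hypothetical fused pyramid attention: channel attention inside each
    pyramid branch, plus a softmax weighting across branches as a lightweight
    stand-in for layer attention; the paper's actual module may differ."""
    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(ChannelAttention(channels) for _ in scales)
        # One learnable logit per pyramid branch ("layer" weights, assumed).
        self.branch_logits = nn.Parameter(torch.zeros(len(scales)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        outs = []
        for s, attn in zip(self.scales, self.branches):
            y = F.avg_pool2d(x, s) if s > 1 else x         # downsample to scale s
            y = attn(y)                                    # per-scale attention
            if s > 1:                                      # restore resolution
                y = F.interpolate(y, size=(h, w), mode="bilinear",
                                  align_corners=False)
            outs.append(y)
        weights = torch.softmax(self.branch_logits, dim=0)
        fused = sum(wi * yi for wi, yi in zip(weights, outs))
        return x + fused                                   # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    print(FusedPyramidAttention(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```

The residual connection mirrors the skip connections mentioned in the abstract, and the per-branch downsampling/upsampling gives each branch a different effective receptive field, which is one common way to realize a multi-scale pyramid.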