Deep learning techniques and deep networks have recently been extensively studied and widely applied to single image super-resolution (SR). Among them, channel attention has attracted considerable interest owing to the significant boost it provides to the representational power of convolutional neural networks. However, the original channel attention neglects positional information, which is critical and whose omission limits performance. In this work, a coordinate attention mechanism is explored to alleviate this problem and thereby enhance SR performance. Specifically, a deep residual coordinate attention SR network (COSR) is proposed, which incorporates the presented coordinate attention blocks into a deep nested residual structure. The coordinate attention captures positional information by computing average-pooled feature vectors along the two spatial directions, thus aggregating features at different coordinates. The nested residual blocks pass low-frequency information from the input to the output through skip connections, allowing the convolution filters to concentrate on high-frequency textures and edges and thereby reducing the difficulty of reconstruction. Extensive experiments demonstrate that the proposed COSR outperforms many state-of-the-art SR methods in terms of both quantitative metrics and visual quality.
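To illustrate the directional pooling described above, the following is a minimal PyTorch sketch of a coordinate attention block, assuming the standard formulation (average pooling along height and width, a shared 1x1 encoding, and per-direction attention maps); the reduction ratio, layer names, and module structure are illustrative assumptions, not the authors' exact COSR implementation.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of coordinate attention: average-pool along each spatial
    direction, encode the two descriptors jointly, then re-weight the input."""
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumed hyperparameter
        super().__init__()
        mid = max(8, channels // reduction)
        # Shared 1x1 conv encoding the concatenated directional descriptors.
        self.encode = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Separate 1x1 convs produce the per-direction attention maps.
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.size()
        # Average pooling along the width gives a (N, C, H, 1) descriptor;
        # pooling along the height gives (N, C, 1, W), transposed to (N, C, W, 1).
        pool_h = x.mean(dim=3, keepdim=True)                       # (N, C, H, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (N, C, W, 1)
        y = self.encode(torch.cat([pool_h, pool_w], dim=2))        # (N, mid, H+W, 1)
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))                      # (N, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        # Re-weight the input with both directional attention maps.
        return x * a_h * a_w

# Usage example
x = torch.randn(1, 64, 32, 32)
out = CoordinateAttention(64)(x)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In a residual arrangement such as the nested structure described above, a block of this kind would typically be placed after the convolutional layers inside each residual block, with the block output added back to its input through the skip connection.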