Images are an important information source for human perception and machine pattern recognition, and image quality determines the accuracy and sufficiency of the information they convey. With the rapid development of deep learning in image processing, no-reference image quality assessment (NR-IQA) has become increasingly important. Currently, most NR-IQA methods rely mainly on global image features while neglecting detail-rich local features and the dependencies between channels. However, distorted and reference images differ in subtle details, and different channels contribute unequally to quality assessment. Additionally, multi-scale feature extraction can fuse detailed information from images at different resolutions, and combining global and local features is essential for effective feature extraction. Therefore, a multi-scale residual convolutional neural network with an attention mechanism (MsRCANet) is proposed for NR-IQA. First, the network extracts global features and processes local features; specifically, a multi-scale residual block is used to extract features from distorted images. Then, residual learning with an active weighted mapping strategy and a channel attention mechanism further processes these features to obtain richer high-frequency information. Finally, a fusion strategy and a fully connected layer are used to predict the image quality score. Experimental results on four synthetic databases and three in-the-wild IQA databases, together with cross-database validation, show that the proposed method generalizes well and is competitive with state-of-the-art methods.
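To make the architectural idea concrete, the following is a minimal PyTorch-style sketch of a multi-scale residual block combined with squeeze-and-excitation-style channel attention. The kernel sizes, channel counts, and reduction ratio are illustrative assumptions, not the exact configuration used in MsRCANet.

```python
# Hypothetical sketch: multi-scale residual block with channel attention.
# Layer sizes are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global average pooling per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in [0, 1]
        )

    def forward(self, x):
        return x * self.fc(x)                              # reweight channels

class MultiScaleResidualBlock(nn.Module):
    """Parallel 3x3 and 5x5 branches, fused, reweighted, with a residual skip."""
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)   # merge the two scales
        self.attn = ChannelAttention(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.relu(self.branch3(x)),
                       self.relu(self.branch5(x))], dim=1)
        y = self.attn(self.fuse(y))
        return self.relu(x + y)                            # residual connection

# Usage: a 64-channel feature map passes through the block with its shape preserved.
block = MultiScaleResidualBlock(64)
out = block(torch.randn(1, 64, 32, 32))                    # -> torch.Size([1, 64, 32, 32])
```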