The strong results of deep learning have led to the widespread use of deep neural networks across many tasks, including image super-resolution. The performance of a deep neural network is directly affected by its loss function. Most methods use an intensity loss, such as MSE, which computes the pixel-wise difference between the predicted image and the ground truth. Since the human visual system is more sensitive to the structural information of a scene, it is desirable that the loss function measure the impact of structural errors. In addition, screen content images have become widespread owing to applications such as desktop sharing and remote computing, which makes super-resolution of screen content images a crucial technique for enhancing the quality of low-resolution images. In the proposed loss function, the structural error is weighted using DCT components. The model is trained and tested on screen content images, and both subjective and objective experimental results demonstrate the effectiveness of the proposed loss for screen content images.
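The abstract does not specify the exact form of the DCT weighting, but the general idea of a DCT-domain, frequency-weighted structural loss can be sketched as follows. The weighting scheme below (linearly emphasizing higher-frequency coefficients) is a hypothetical illustration, not the paper's actual formulation:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(img):
    # Orthonormal 2-D DCT-II, applied along both axes.
    return dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_weighted_loss(pred, gt, weight=None):
    """Mean squared error between the DCT coefficients of the predicted
    image and the ground truth, weighted per frequency component.

    The default weighting is a hypothetical example that emphasizes
    higher-frequency (structural/edge) components, which dominate in
    screen content such as text and graphics."""
    diff = dct2(pred) - dct2(gt)
    if weight is None:
        h, w = pred.shape
        u = np.arange(h)[:, None]  # vertical frequency index
        v = np.arange(w)[None, :]  # horizontal frequency index
        weight = 1.0 + (u + v) / (h + w)
    return float(np.mean(weight * diff ** 2))
```

In a training loop, such a loss would typically be implemented with a differentiable DCT (e.g. via a fixed linear transform in the deep learning framework) so gradients can flow back to the network.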