When deploying a super-resolution (SR) model on an edge device, it is common to train the model in the cloud on predetermined training images, because edge devices lack the large-scale training data and computational power that training requires. However, such frameworks may suffer from a domain gap, since the input images these devices encounter often have characteristics different from those of the training images. It is therefore essential to continually update the model parameters through on-device learning, which accounts for the limited computational power of edge devices and makes use of on-site input images. In this paper, we present a fast and efficient on-device learning framework for an SR model that addresses both the restricted computation and the domain gap. Specifically, we propose an architecture for training the SR model in the quantized domain, which helps reduce the quantization errors that accumulate during training. In addition, we propose cost-constrained gradient pruning and a meta-learning-based fast training scheme to improve restoration performance within fewer iterations. Experimental results show that our approach maintains restoration performance on unseen inputs using the lightweight model produced by our quantization scheme.