Integrating superpixel segmentation into convolutional neural networks is known to be effective in enhancing the accuracy of land-cover classification. However, most existing methods accomplish such integration by focusing on the development of new network architectures, an approach that suffers from several flaws: 1) conflicts between general superpixels and semantic labels introduce noise into training, especially at object boundaries; 2) the absence of training guidance for superpixels leads to ineffective regional feature learning; 3) unnecessary superpixel segmentation in the testing stage not only increases the computational burden but also incurs jagged edges. In this study, we propose a novel semantic-aware region (SARI) loss to guide the effective learning of regional features with superpixels for accurate land-cover classification. The key idea of the proposed method is to reduce the feature variance inside and between homogeneous superpixels while enlarging the feature discrepancy between heterogeneous ones. The SARI loss is thus designed with three sub-parts: a superpixel variance loss, an intra-class similarity loss, and an inter-class distance loss. We also develop semantic superpixels to assist in network training with the SARI loss while overcoming the limitations of general superpixels. Extensive experiments on two challenging datasets demonstrate that the SARI loss can facilitate regional feature learning, achieving state-of-the-art performance with mIoU scores of around 97.11% and 73.99% on the two datasets, respectively.
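The exact formulation of the three sub-losses is not given in this excerpt. As a rough illustration of the stated idea (small variance within superpixels, small distance between same-class superpixel features, large distance between different-class ones), the following NumPy sketch computes a three-part objective over per-pixel features grouped by superpixel. All function names, the squared-distance form, and the hinge margin for the inter-class term are assumptions, not the paper's definitions.

```python
import numpy as np

def sari_loss_sketch(features, superpixel_ids, superpixel_labels, margin=1.0):
    """Illustrative three-part regional loss (NOT the paper's exact formulation).

    features          : (N, D) array of per-pixel features
    superpixel_ids    : (N,) superpixel index for each pixel
    superpixel_labels : (S,) semantic class label for each superpixel
    margin            : assumed hinge margin for the inter-class term
    """
    sp_ids = np.unique(superpixel_ids)

    # 1) Superpixel variance loss: average within-superpixel feature variance.
    centroids, labels = [], []
    var_loss = 0.0
    for s in sp_ids:
        f = features[superpixel_ids == s]
        c = f.mean(axis=0)
        var_loss += np.mean(np.sum((f - c) ** 2, axis=1))
        centroids.append(c)
        labels.append(superpixel_labels[s])
    var_loss /= len(sp_ids)
    centroids = np.stack(centroids)
    labels = np.asarray(labels)

    # 2) Intra-class similarity: pull centroids of same-class superpixels together.
    # 3) Inter-class distance: push different-class centroids at least `margin` apart.
    intra = inter = 0.0
    n_intra = n_inter = 0
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            d2 = np.sum((centroids[i] - centroids[j]) ** 2)
            if labels[i] == labels[j]:
                intra += d2
                n_intra += 1
            else:
                inter += max(0.0, margin - np.sqrt(d2)) ** 2
                n_inter += 1
    intra = intra / n_intra if n_intra else 0.0
    inter = inter / n_inter if n_inter else 0.0

    return var_loss + intra + inter
```

With perfectly compact, well-separated regional features (identical features within each superpixel, same-class superpixels coinciding, different-class ones farther apart than the margin), this sketch evaluates to zero, which matches the intended optimum of the described objective.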