Existing image semantic segmentation methods favor learning consistent representations by extracting long-range contextual features with attention, multi-scale, or graph aggregation strategies. These methods usually treat misclassified and correctly classified pixels equally, which misleads the optimization process and yields inconsistent intra-class pixel feature representations in the embedding space during learning. In this paper, we propose the auxiliary representation calibration head (RCH), which consists of image decoupling, prototype clustering, and error calibration modules together with a metric loss function, to calibrate these error-prone feature representations for better intra-class consistency and segmentation performance. RCH can be attached to the hidden layers and trained jointly with the segmentation network, then detached at inference so that no additional parameters are introduced. Experimental results show that our method significantly boosts the performance of current segmentation methods on multiple datasets (e.g., we outperform the original HRNet and OCRNet by 1.1% and 0.9% mIoU on the Cityscapes test set). Code is available at https://github.com/VipaiLab/RCH.
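The auxiliary head can be pictured as a small module trained alongside the segmentation network and simply dropped at test time. The following is a minimal PyTorch sketch of the prototype-clustering idea only; the class name (ProtoCalibHead), the cosine-distance metric loss, and the handling of the ignore label are our assumptions, not the exact RCH design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoCalibHead(nn.Module):
    """Auxiliary calibration head (hypothetical sketch): projects hidden
    features into an embedding space, builds per-class prototypes, and pulls
    each pixel embedding toward its class prototype. Used only in training."""

    def __init__(self, in_channels, embed_dim, ignore_label=255):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=1)
        self.ignore_label = ignore_label

    def forward(self, feats, labels):
        # feats: (B, C, H, W) hidden features; labels: (B, H', W') ground truth
        emb = F.normalize(self.proj(feats), dim=1)                    # (B, D, H, W)
        labels = F.interpolate(labels[:, None].float(), size=emb.shape[-2:],
                               mode="nearest").long().squeeze(1)      # (B, H, W)
        emb = emb.permute(0, 2, 3, 1).reshape(-1, emb.shape[1])       # (N, D)
        labels = labels.reshape(-1)                                   # (N,)

        loss, n_classes = emb.new_zeros(()), 0
        for c in labels.unique():
            if c.item() == self.ignore_label:                         # skip void pixels
                continue
            cls_emb = emb[labels == c]
            proto = F.normalize(cls_emb.mean(dim=0, keepdim=True), dim=1)  # class prototype
            loss = loss + (1 - cls_emb @ proto.t()).mean()            # cosine pull
            n_classes += 1
        return loss / max(n_classes, 1)
```

In such a setup the auxiliary loss would be added to the usual segmentation loss during training (e.g., total = seg_loss + lambda_aux * calib_loss), and the head is never evaluated at inference, which is why it adds no parameters to the deployed model.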
Data-free quantization has recently emerged as a promising way to quantize networks without access to the original training data. However, such approaches suffer from homogenization of the synthetic data, caused by inefficient generation of diverse samples and performance collapse of the generator. To alleviate this issue, we propose Meta-BNS, a novel scheme for adversarial data-free quantization that consists of a Meta-BNS module and an adversarial exploration module. The Meta-BNS module automatically learns a function producing an enhancement coefficient matrix for the BN loss, which provides a suitable constraint on the generator. The adversarial exploration module leverages a minimax game between the generator and the quantized model, played through the input gradients, to encourage the generator to capture the high-dimensional, complex distribution of real data. Experimental results show that our method achieves state-of-the-art performance under various data-free quantization settings.
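A generic adversarial data-free quantization loop can be sketched as below, assuming a standard PyTorch setup. The helper names (bns_loss, train_step), the hypothetical return_bn_stats flag on the full-precision model, and the KL-based disagreement term are illustrative assumptions; the learned enhancement coefficient matrix of the Meta-BNS module is abstracted into per-layer weights coeff.

```python
import torch
import torch.nn.functional as F

def bns_loss(real_stats, fake_stats, coeff):
    """Match the synthetic batch's BN statistics to those stored in the
    full-precision model, weighted per layer by a coefficient (standing in
    for the learned enhancement coefficient matrix)."""
    loss = 0.0
    for (mu_r, var_r), (mu_s, var_s), c in zip(real_stats, fake_stats, coeff):
        loss = loss + c * (F.mse_loss(mu_s, mu_r) + F.mse_loss(var_s, var_r))
    return loss

def train_step(generator, fp_model, q_model, g_opt, q_opt, coeff,
               batch_size=64, z_dim=128):
    # --- Generator step: match BN statistics while maximizing the
    #     disagreement between the full-precision and quantized models. ---
    z = torch.randn(batch_size, z_dim)
    fake = generator(z)
    # `return_bn_stats` is a hypothetical hook returning (running, batch) stats
    fp_logits, real_stats, fake_stats = fp_model(fake, return_bn_stats=True)
    q_logits = q_model(fake)
    disagreement = F.kl_div(F.log_softmax(q_logits, dim=1),
                            F.softmax(fp_logits, dim=1), reduction="batchmean")
    g_loss = bns_loss(real_stats, fake_stats, coeff) - disagreement
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # --- Quantized-model step: distill from the full-precision model on
    #     freshly generated samples (the other side of the minimax game). ---
    with torch.no_grad():
        fake = generator(torch.randn(batch_size, z_dim))
        fp_logits, _, _ = fp_model(fake, return_bn_stats=True)
    q_logits = q_model(fake)
    q_loss = F.kl_div(F.log_softmax(q_logits, dim=1),
                      F.softmax(fp_logits, dim=1), reduction="batchmean")
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
```

The two steps form the minimax game: the generator is rewarded for samples on which the quantized model disagrees with the full-precision model, while the quantized model is trained to close that gap by distillation.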