No-reference segmentation quality evaluation aims to assess the quality of image segmentation without any reference image at application time. It typically relies on quality criteria that characterize a good segmentation using prior knowledge, which in turn requires a precise description of the objects in the segmentation and the integration of that representation into the evaluation process. In this paper, starting from the semantic relationship between the original image and the segmentation result, we propose a feature contrastive learning method that improves no-reference segmentation quality evaluation and can be applied in semantic segmentation scenarios. A contrastive learning step is performed in the feature space by learning the pixel-level similarity between the original image and the segmentation result. In addition, a class activation map (CAM) is used to guide the evaluation, making the score more consistent with human visual judgement. Experiments were conducted on the PASCAL VOC 2012 dataset, with segmentation results produced by state-of-the-art (SoA) segmentation methods, and two meta-measure criteria were adopted to validate the effectiveness of the proposed method. Compared with other no-reference evaluation methods, our method achieves higher accuracy; it is comparable to supervised evaluation methods and even exceeds them in some cases.
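The abstract does not specify implementation details, so the following is only a minimal, illustrative sketch of the general idea it describes: a pixel-level contrastive agreement score between feature maps of the original image and of the segmentation result, weighted by a CAM. It assumes PyTorch, a shared encoder producing aligned feature maps, and an InfoNCE-style formulation; all names, shapes, and hyperparameters are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_quality_score(img_feats, seg_feats, cam, temperature=0.1, num_neg=1024):
    """Hypothetical sketch of a CAM-weighted pixel-level contrastive score.

    img_feats, seg_feats: (C, H, W) feature maps from a shared encoder
    cam:                  (H, W) class activation map, values in [0, 1]
    Returns a scalar in [0, 1]; higher means better image/segmentation agreement.
    """
    C, H, W = img_feats.shape
    # Flatten to (N, C) pixel embeddings and L2-normalise them.
    a = F.normalize(img_feats.reshape(C, -1).t(), dim=1)   # (N, C)
    b = F.normalize(seg_feats.reshape(C, -1).t(), dim=1)   # (N, C)

    # Cosine similarity of corresponding pixels acts as the positive pair.
    pos = (a * b).sum(dim=1) / temperature                  # (N,)

    # Similarities to a random subset of other pixels act as negatives,
    # subsampled to keep the similarity matrix tractable.
    idx = torch.randperm(a.shape[0])[:num_neg]
    neg = (a @ b[idx].t()) / temperature                    # (N, K)

    # InfoNCE-style per-pixel agreement probability.
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)      # (N, K + 1)
    per_pixel = F.softmax(logits, dim=1)[:, 0]              # (N,)

    # Weight each pixel by the CAM so salient regions dominate the score.
    w = cam.reshape(-1)
    return (per_pixel * w).sum() / w.sum().clamp(min=1e-8)

# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    feats_img = torch.randn(64, 32, 32)
    feats_seg = torch.randn(64, 32, 32)
    cam = torch.rand(32, 32)
    print(float(contrastive_quality_score(feats_img, feats_seg, cam)))
```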