Unpaired image-to-image translation has made considerable progress in recent years. However, existing methods suffer from mode collapse caused by overfitting of the discriminator. To address this, we propose a straightforward method, multi-feature contrastive learning (MCL), which constructs a contrastive loss from the features of the discriminator's output layer. By further leveraging contrastive learning, the proposed method strengthens the discriminator and alleviates mode collapse. Extensive experiments on several open challenge datasets show that our method achieves state-of-the-art results compared with current approaches, and a series of ablation studies demonstrates that it is more stable. In addition, the proposed method is also practical for single-image translation tasks. Code is available at https://github.com/gouayao/MCL.
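A minimal sketch of how a contrastive loss over discriminator output-layer features might look is given below. The exact MCL formulation is not described in the abstract, so an InfoNCE-style loss is assumed here, and the names `feats_real`, `feats_fake`, and `temperature` are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a contrastive loss over discriminator output-layer features (PyTorch).
# Positive pairs are features of corresponding real and translated images;
# other batch entries act as negatives.
import torch
import torch.nn.functional as F

def multi_feature_contrastive_loss(feats_real, feats_fake, temperature=0.07):
    """feats_real, feats_fake: (N, D) features taken from the discriminator's
    output layer for real and translated images of the same batch."""
    # L2-normalize so dot products become cosine similarities.
    z_real = F.normalize(feats_real, dim=1)
    z_fake = F.normalize(feats_fake, dim=1)

    # Similarity of every translated-image feature to every real-image feature.
    logits = z_fake @ z_real.t() / temperature            # (N, N)

    # Positive pairs lie on the diagonal (same batch index).
    targets = torch.arange(z_fake.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```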
With the advantage of requiring only a limited number of samples, few-shot learning has developed rapidly in recent years. It is mostly applied to object classification or detection with a small number of samples, typically fewer than ten. However, there is little research on few-shot detection, and even less on one-shot detection. In this paper, a multi-feature information-assisted one-shot detection method is proposed to improve the accuracy of one-shot object detection. Specifically, two auxiliary modules are added to the detection algorithm: the Semantic Feature Module (SFM) and the Detail Feature Module (DFM), which extract semantic and detailed feature information, respectively, from the samples in the support set. These two kinds of information are then correlated with the feature map extracted from the query image to obtain the auxiliary information used to complete one-shot detection. Because the two auxiliary modules retain more of the semantic and detailed information of the support samples, the proposed method improves the utilization of sample feature information and raises object detection accuracy by 2.97% over the benchmark method.
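As a rough illustration of how support-set feature information can be combined with a query feature map, the sketch below correlates assumed SFM/DFM output vectors with the query features. The abstract does not detail the two modules, so all tensor shapes, names, and the fusion strategy are assumptions.

```python
# Sketch of fusing support-set feature information with a query feature map (PyTorch).
# The support features are assumed to be pooled vectors that are correlated
# with the query feature map like 1x1 kernels.
import torch

def correlate_support_with_query(query_feat, semantic_vec, detail_vec):
    """query_feat: (B, C, H, W) feature map from the query image.
    semantic_vec, detail_vec: (B, C) vectors assumed to come from the
    SFM and DFM applied to the single support image."""
    # Per-sample correlation of each support vector with the query features,
    # yielding one similarity map per kind of auxiliary information.
    sem_maps = torch.einsum('bchw,bc->bhw', query_feat, semantic_vec)
    det_maps = torch.einsum('bchw,bc->bhw', query_feat, detail_vec)
    # Stack the two auxiliary maps and append them to the query features
    # so a downstream detection head can consume the combined information.
    aux = torch.stack([sem_maps, det_maps], dim=1)        # (B, 2, H, W)
    return torch.cat([query_feat, aux], dim=1)            # (B, C+2, H, W)
```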
Traditional weakly supervised object detection algorithms suffer from weak feature representations in the feature maps produced by the feature extraction network and a weak mapping between the feature space and the classification results, both of which limit detection performance. To address these problems, this paper proposes a weakly supervised object detection algorithm based on strong representation learning. To enhance the representation ability of the feature maps, the algorithm weights the channels of the feature maps according to the importance of each channel, strengthening crucial feature maps while suppressing secondary ones. Meanwhile, a Gaussian mixture distribution model with better classification performance is used to design the object instance classifier, further strengthening the mapping between the feature space and the classification results, and a large-margin Gaussian mixture (L-GM) loss is designed to enlarge the distance between sample categories and improve the generalization of the classifier. To verify the effectiveness and advancement of the proposed algorithm, its performance is compared with six classical weakly supervised object detection algorithms on the VOC datasets. Experiments show that the proposed algorithm outperforms the other classical algorithms in average precision (AP) and correct localization (CorLoc), with gains of 1.1%~14.6% and 2.8%~19.4%, respectively.
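The abstract does not name the channel-weighting mechanism, so the sketch below assumes a squeeze-and-excitation-style module purely for illustration; `reduction` is a hypothetical hyperparameter, and the L-GM loss component is not sketched here.

```python
# Sketch of channel-wise feature-map weighting (PyTorch), assuming an
# SE-style gate: per-channel importance weights are learned from globally
# pooled features and applied multiplicatively.
import torch.nn as nn

class ChannelWeighting(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Learn one importance weight per channel from globally pooled features.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze: (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: (B, C, 1, 1)
        # Emphasize crucial channels and suppress secondary ones.
        return x * w
```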