In multi-view multi-label learning, each object is represented by multiple data views and belongs to multiple class labels simultaneously. Generally, all data views contribute to the multi-label learning task, but to different degrees. Moreover, within each data view, each class label is associated with only a subset of the features, and different features contribute differently to each class label. In this paper, we propose VLSF, a novel framework for multi-view multi-label learning with View-Label-Specific Features. Specifically, we first learn a low-dimensional label-specific representation for each data view, construct a multi-label classification model on top of it by exploiting label correlations and view consensus, and jointly learn the contribution weight of each data view to the multi-label learning task over all class labels. The final prediction is then made by combining the predictions of all the per-view classifiers according to the learned contribution weights. Extensive comparison experiments against state-of-the-art approaches demonstrate the effectiveness of the proposed VLSF.

INDEX TERMS: Multi-label learning, multi-view learning, view-label-specific features.
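To make the weighted prediction step concrete, here is a minimal sketch assuming per-view linear label-specific classifiers. All names (`X_views`, `W_views`, `alpha`) are illustrative; the paper's actual joint optimization of these quantities is not reproduced here.

```python
import numpy as np

# Minimal sketch of the prediction step described in the abstract:
# each view v has a label-specific linear classifier X_v @ W_v, and the
# final score is a weighted combination over views using the learned
# per-view contribution weights alpha. Illustrative only.

def predict(X_views, W_views, alpha):
    """X_views: list of (n_samples, d_v) feature matrices, one per view.
    W_views: list of (d_v, n_labels) label-specific weight matrices.
    alpha:   (n_views,) learned contribution weights (non-negative, sum to 1).
    Returns an (n_samples, n_labels) matrix of combined prediction scores."""
    return sum(a * (X @ W) for a, X, W in zip(alpha, X_views, W_views))

# Toy usage: 3 views, 5 samples, 4 labels
rng = np.random.default_rng(0)
dims = (8, 6, 10)
X_views = [rng.normal(size=(5, d)) for d in dims]
W_views = [rng.normal(size=(d, 4)) for d in dims]
alpha = np.array([0.5, 0.3, 0.2])
y_scores = predict(X_views, W_views, alpha)
y_pred = (y_scores > 0).astype(int)  # threshold for multi-label output
```

Constraining `alpha` to be non-negative and sum to one keeps the combined score a convex mixture of the per-view scores, matching the abstract's notion of per-view contribution weights.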
Multi-label image classification (MLIC) is a fundamental and practical task that aims to assign multiple labels to an image. In recent years, many deep convolutional neural network (CNN) based approaches have been proposed that model label correlations to discover label semantics and learn semantic representations of images. This paper advances this line of research by improving both the modeling of label correlations and the learning of semantic representations. On the one hand, besides the local semantics of each individual label, we propose to further explore the global semantics shared by multiple labels. On the other hand, existing approaches mainly learn semantic representations at the last convolutional layer of a CNN, yet it has been observed that representations from different CNN layers capture features at different levels or scales and differ in discriminative ability. We therefore propose to learn semantic representations at multiple convolutional layers. To this end, we design a Multi-layered Semantic Representation Network (MSRN) that discovers both local and global label semantics by modeling label correlations, and uses these label semantics to guide semantic representation learning at multiple layers through an attention mechanism. Extensive experiments on four benchmark datasets, VOC 2007, COCO, NUS-WIDE, and Apparel, show competitive performance of the proposed MSRN against state-of-the-art models.
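As an illustration of the layer-wise, label-guided attention idea, here is a minimal PyTorch sketch. Module names, shapes, and the simple averaging of per-layer logits are assumptions made for exposition, not the authors' MSRN implementation.

```python
import torch
import torch.nn as nn

# Sketch of label-guided attention applied at multiple CNN stages, in the
# spirit of the abstract. All design details here are assumptions.

class SemanticAttention(nn.Module):
    """Pools a (B, C, H, W) feature map into one vector per label,
    using learned label embeddings as attention queries."""
    def __init__(self, feat_dim, num_labels, embed_dim=300):
        super().__init__()
        self.label_embed = nn.Parameter(torch.randn(num_labels, embed_dim))
        self.proj = nn.Linear(feat_dim, embed_dim)  # map features to label space

    def forward(self, fmap):
        feats = fmap.flatten(2).transpose(1, 2)                  # (B, HW, C)
        keys = self.proj(feats)                                  # (B, HW, E)
        attn = torch.softmax(keys @ self.label_embed.T, dim=1)   # (B, HW, L)
        return attn.transpose(1, 2) @ feats                      # (B, L, C)

class MultiLayerHead(nn.Module):
    """Applies semantic attention at several CNN stages and averages
    the per-layer label scores into final multi-label logits."""
    def __init__(self, feat_dims, num_labels):
        super().__init__()
        self.attns = nn.ModuleList([SemanticAttention(d, num_labels) for d in feat_dims])
        self.cls = nn.ModuleList([nn.Linear(d, 1) for d in feat_dims])

    def forward(self, fmaps):  # list of (B, C_i, H_i, W_i) feature maps
        logits = [c(a(f)).squeeze(-1) for a, c, f in zip(self.attns, self.cls, fmaps)]
        return torch.stack(logits).mean(0)  # (B, num_labels)

# Toy usage: two feature maps from different CNN stages
fmaps = [torch.randn(2, 64, 14, 14), torch.randn(2, 128, 7, 7)]
head = MultiLayerHead([64, 128], num_labels=20)
logits = head(fmaps)  # (2, 20) multi-label logits
```

Attending at several stages lets early, fine-grained features and late, high-level features each contribute per-label evidence, which is the motivation the abstract gives for moving beyond the last convolutional layer.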