Cross-modal clustering (CMC) aims to enhance clustering performance by exploiting complementary information from multiple modalities. However, the performance of existing CMC algorithms remains unsatisfactory due to conflicts between heterogeneous modalities and the high-dimensional, non-linear nature of each individual modality. In this paper, a novel deep mutual information maximin (DMIM) method for cross-modal clustering is proposed to maximally preserve the shared information of multiple modalities while eliminating the superfluous information of individual modalities in an end-to-end manner. Specifically, a multi-modal shared encoder is first built to align the latent feature distributions by sharing parameters across modalities. Then, DMIM formulates the complementarity of multi-modal representations as a mutual information maximin objective function, in which the shared information of multiple modalities and the superfluous information of individual modalities are identified by mutual information maximization and minimization, respectively. To solve the DMIM objective function, we propose a variational optimization method that ensures convergence to a locally optimal solution. Moreover, an auxiliary overclustering mechanism is employed to refine the clustering structure by introducing finer-grained clustering classes. Extensive experimental results demonstrate the superiority of the DMIM method over state-of-the-art cross-modal clustering methods on the IAPR-TC12, ESP-Game, MIRFlickr and NUS-Wide datasets.
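For concreteness, one plausible instantiation of such a mutual information maximin objective is sketched below; this form is consistent with the abstract but is an assumption rather than the paper's exact formulation, and the trade-off weight \(\beta\) and the two-modality setting are illustrative choices:
\[
\max_{\theta}\; I\big(z^{(1)}; z^{(2)}\big) \;-\; \beta \sum_{v=1}^{2} I\big(z^{(v)}; x^{(v)}\big),
\]
where \(x^{(v)}\) is the input of modality \(v\), \(z^{(v)}\) is its latent representation produced by the shared encoder with parameters \(\theta\), the first term maximizes the information shared across modalities, and the second term penalizes superfluous, modality-specific information. In practice both mutual information terms would be estimated with variational bounds, in line with the variational optimization described above.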
Recently, cross-modal analysis has drawn much attention due to the rapid growth and widespread availability of multimodal data. It integrates multiple modalities to improve learning and generalization performance. However, most previous methods focus only on learning a common shared feature space for all modalities and ignore the private information hidden in each individual modality. To address this problem, we propose a novel shared-private information bottleneck (SPIB) method for cross-modal clustering. First, we devise a hybrid words model and a consensus clustering model to construct the shared information of multiple modalities, which capture the statistical correlation of low-level features and the semantic relations of high-level clustering partitions, respectively. Second, the shared information of multiple modalities and the private information of individual modalities are maximally preserved through a unified information maximization function. Finally, the SPIB function is optimized by a sequential "draw-and-merge" procedure, which guarantees convergence to a local maximum. Besides, to address the lack of tags in cross-modal social images, we also investigate the use of structured prior knowledge in the form of a knowledge graph to enrich the information in the semantic modality and design a novel semantic similarity measurement for social images. Experimental results on four types of cross-modal datasets demonstrate that our method outperforms state-of-the-art approaches.
INDEX TERMS: Cross-modal clustering, information bottleneck, mutual information, knowledge graph, social images.
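As a rough sketch, a unified information-maximization objective of the kind described above can be written in standard information bottleneck notation; the specific variables and trade-off weights \(\lambda\) and \(\beta\) below are assumptions for illustration, not the paper's exact function:
\[
\max_{p(t \mid x)}\; I\big(T; Y_{\text{shared}}\big) \;+\; \lambda \sum_{v} I\big(T; Y^{\text{priv}}_{v}\big) \;-\; \tfrac{1}{\beta}\, I(T; X),
\]
where \(X\) indexes the data objects, \(T\) is the cluster assignment, \(Y_{\text{shared}}\) encodes the shared information constructed by the hybrid words and consensus clustering models, and \(Y^{\text{priv}}_{v}\) encodes the private information of modality \(v\). A sequential "draw-and-merge" optimization, in the spirit of sequential information bottleneck, would then repeatedly draw one object out of its current cluster and merge it into the cluster that most increases this objective, giving monotone improvement and hence convergence to a local maximum.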