How Deep Neural Networks (DNNs) can best understand multimedia content remains an open problem, mainly due to two factors. First, conventional DNNs cannot effectively learn representations of images with sparse visual information, such as the images that describe knowledge concepts in textbooks. Second, existing DNNs cannot effectively capture the fine-grained interactions between images and their text descriptions. To address these issues, we propose a deep Cross-Media Grouping Fusion Network (CMGFN), which has two distinctive properties: 1) CMGFN can effectively learn visual features from images with sparse visual information. This is achieved by first progressively shifting the attention of convolution filters toward valuable visual regions, and then enhancing the use of key visual information in feature construction. 2) Through a cross-media grouping co-attention mechanism, CMGFN exploits the interactions between visual features of different semantics and the textual descriptions, learning cross-media features that represent distinct fine-grained semantics in separate groups. Empirical studies demonstrate that CMGFN not only achieves state-of-the-art performance on multimedia documents containing sparse visual information, but also generalizes well to other multimedia data, e.g., multimedia fake news.
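The abstract does not spell out the grouping co-attention formulation, so the following is only a minimal sketch of the general idea: visual region features and text token features are split into G semantic groups, and each group of visual queries attends over the text tokens independently, so each group can fuse a different fine-grained semantic. All names (GroupedCoAttention, key_proj, value_proj) and the specific projections are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn


class GroupedCoAttention(nn.Module):
    """Illustrative grouped co-attention (assumed form, not the paper's exact one):
    features are split into G groups; each group of visual queries attends over
    the text tokens separately, yielding per-group cross-media features."""

    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        assert dim % num_groups == 0, "feature dim must divide evenly into groups"
        self.num_groups = num_groups
        self.group_dim = dim // num_groups
        # Project text tokens into attention keys/values (hypothetical design choice).
        self.key_proj = nn.Linear(dim, dim)
        self.value_proj = nn.Linear(dim, dim)

    def forward(self, visual: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # visual: (B, Nv, dim) region features; text: (B, Nt, dim) token features.
        B, Nv, _ = visual.shape
        Nt = text.shape[1]
        G, d = self.num_groups, self.group_dim
        # Split each feature vector into G groups of size d.
        q = visual.view(B, Nv, G, d).transpose(1, 2)               # (B, G, Nv, d)
        k = self.key_proj(text).view(B, Nt, G, d).transpose(1, 2)  # (B, G, Nt, d)
        v = self.value_proj(text).view(B, Nt, G, d).transpose(1, 2)
        # Group-wise attention of visual regions over text tokens.
        attn = torch.softmax(q @ k.transpose(-1, -2) / d ** 0.5, dim=-1)  # (B, G, Nv, Nt)
        fused = attn @ v                                           # (B, G, Nv, d)
        # Merge groups back; each group carries a distinct fine-grained semantic.
        return fused.transpose(1, 2).reshape(B, Nv, G * d)


# Usage sketch: 36 visual regions and 20 text tokens, 512-dim features, 8 groups.
layer = GroupedCoAttention(dim=512, num_groups=8)
out = layer(torch.randn(2, 36, 512), torch.randn(2, 20, 512))  # (2, 36, 512)
```

Structurally this resembles multi-head attention with heads playing the role of semantic groups; how CMGFN actually parameterizes the groups and fuses them with the visual-attention branch is detailed in the paper itself.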