Learning unimodal representations and improving multimodal fusion are two core problems in multimodal sentiment analysis (MSA). However, previous methods ignore the information differences between modalities: the text modality carries higher-order semantic features than the audio and visual modalities. In this article, we propose a sparse- and cross-attention (SCANET) framework with an asymmetric architecture to improve multimodal representation and fusion. Specifically, in the unimodal representation stage, we use sparse attention to improve the representation efficiency of the audio and visual modalities and to reduce their low-order redundant features. In the multimodal fusion stage, we design a novel asymmetric fusion module that uses the audio and visual information matrices as weights to strengthen the target text modality. We also introduce contrastive learning to effectively enhance complementary features between modalities. We evaluate SCANET on the CMU-MOSI and CMU-MOSEI datasets, and experimental results show that our proposed method achieves state-of-the-art performance.
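The asymmetric fusion described above can be pictured as cross-attention in which text provides the queries and the audio and visual streams provide the keys and values that re-weight it. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the module name `AsymmetricCrossAttentionFusion`, the dimensions, and the residual combination are assumptions made for the example.

```python
import torch
import torch.nn as nn

class AsymmetricCrossAttentionFusion(nn.Module):
    """Hypothetical sketch of asymmetric fusion: text acts as the query,
    while audio and visual features supply keys/values whose attention
    weights strengthen the target text representation."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.text_from_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_from_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, audio, visual):
        # text/audio/visual: (batch, seq_len, dim) unimodal feature sequences
        t_a, _ = self.text_from_audio(query=text, key=audio, value=audio)
        t_v, _ = self.text_from_visual(query=text, key=visual, value=visual)
        # Residual combination keeps the high-order text semantics dominant
        return self.norm(text + t_a + t_v)

# Toy usage (shapes are illustrative only)
fusion = AsymmetricCrossAttentionFusion(dim=64)
text = torch.randn(8, 50, 64)
audio = torch.randn(8, 120, 64)
visual = torch.randn(8, 120, 64)
fused = fusion(text, audio, visual)  # (8, 50, 64)
```

The asymmetry comes from using text only as the query side: audio and visual features never receive attention from each other or from text, so their role is limited to supplying weights that refine the text stream.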