The world provides us with data of multiple modalities. Intuitively, models that fuse data from different modalities should outperform unimodal models, since more information is aggregated. Recently, building on the success of deep learning, an influential line of work on deep multimodal learning has achieved remarkable empirical results across a variety of applications. However, theoretical justification in this field is notably lacking. Can multimodal learning provably perform better than unimodal learning? In this paper, we answer this question under one of the most popular multimodal learning frameworks, which first encodes features from different modalities into a common latent space and then maps the latent representations into the task space. We prove that learning with multiple modalities achieves a smaller population risk than learning with only a subset of them. The main intuition is that the former yields a more accurate estimate of the latent space representation. To the best of our knowledge, this is the first theoretical treatment that captures important qualitative phenomena observed in real multimodal applications. Combined with experimental results, we show that multimodal learning does possess an appealing formal guarantee.
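To make the framework concrete, the following is a minimal sketch, not the paper's implementation, assuming PyTorch; the class name MultimodalModel, the linear encoders, and the sum-fusion step are illustrative assumptions. It shows per-modality encoders mapping into a shared latent space and a single head mapping the latent representation into the task space, so that dropping a modality corresponds to using fewer encoders while the rest of the model is unchanged.

```python
import torch
import torch.nn as nn

class MultimodalModel(nn.Module):
    """Hypothetical sketch of the framework described above: each modality is
    encoded into a common latent space, and a shared head maps the fused latent
    representation into the task space. Names and dimensions are illustrative."""

    def __init__(self, modality_dims, latent_dim, task_dim):
        super().__init__()
        # One encoder per modality, all mapping into the same latent space.
        self.encoders = nn.ModuleList(
            [nn.Linear(d, latent_dim) for d in modality_dims]
        )
        # Shared task head operating on the fused latent representation.
        self.head = nn.Linear(latent_dim, task_dim)

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each of shape (batch, d_k).
        # Fuse by summing the per-modality latent codes (one simple fusion choice).
        latent = sum(enc(x) for enc, x in zip(self.encoders, inputs))
        return self.head(latent)

# Usage: two modalities with feature dimensions 32 and 64, a 16-d latent space,
# and a 10-dimensional task space.
model = MultimodalModel(modality_dims=[32, 64], latent_dim=16, task_dim=10)
x_audio, x_image = torch.randn(8, 32), torch.randn(8, 64)
predictions = model([x_audio, x_image])  # shape (8, 10)
```

Under this composition of a latent encoder with a task mapping, the paper's claim is that training the encoders on all available modalities gives a better estimate of the latent representation, and hence a smaller population risk, than training on any subset of them.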