Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/416
Semi-Supervised Multi-Modal Learning with Incomplete Modalities

Abstract: In real-world applications, data often come with multiple modalities. Researchers have proposed multi-modal learning approaches for integrating the information from different modalities. Most previous multi-modal methods assume that training examples have complete modalities. However, due to failures of data collection, self-deficiencies, and various other reasons, multi-modal examples usually have incomplete feature representations in real applications. In this paper, the incomplete feature repre…

Cited by 29 publications (15 citation statements) | References 10 publications
“…Recently, (Xu et al. 2018) sought a latent space and then performed data reconstruction for partial multi-view subspace representation. (Yang et al. 2018b) leveraged the intrinsic and extrinsic information together to yield an inductive learner, SLIM, for semi-supervised scenarios. It can also be readily adopted for either classification or clustering tasks.…”
Section: Related Work
confidence: 99%
“…may be approximately 90% in industrial data (Little and Rubin 2014). Besides, each view may suffer from missing some instances (Xiang et al. 2013; Xu, Tao, and Xu 2015; Yang et al. 2018b). This situation is typically referred to as partial multi-view data, which is common in practical applications (Cai et al. 2018; Zheng et al. 2018).…”
Section: Introduction
confidence: 99%
“…Existing multi-modal learning approaches cannot be directly applied in the incomplete-modality situation unless the incomplete instances are removed, yet the resulting model clearly loses information. Aiming at this issue, there have been some preliminary investigations: Shao et al. (2016) learned latent feature matrices for each incomplete modality and pushed them toward a common consensus; Yang et al. (2018) utilized extrinsic information from unlabeled data against the insufficiencies brought by the incomplete-modality issue. However, these methods are mainly linear, which makes them difficult to extend to non-linear situations, and they rarely consider inconsistent anomalies even in the complete situation.…”
Section: Introduction
confidence: 99%
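The consensus idea mentioned in the statement above — factorizing each incomplete modality into latent features and pulling them toward a shared representation — can be sketched in a few lines. This is a hypothetical NumPy illustration, not the cited authors' actual algorithm: the function name `consensus_factorization`, the regularization weight `lam`, and the plain gradient updates are all assumptions made for clarity.

```python
# Hypothetical sketch: per-modality matrix factorization whose latent
# factors are regularized toward a shared consensus, so instances missing
# from one modality still receive a representation from the others.
import numpy as np

rng = np.random.default_rng(0)

def consensus_factorization(views, masks, k=5, lam=1.0, lr=0.01, iters=200):
    """views: list of (n, d_m) arrays; masks: list of (n,) bool arrays
    marking which instances are observed in each modality."""
    n = views[0].shape[0]
    Us = [rng.normal(scale=0.1, size=(k, X.shape[1])) for X in views]
    Vs = [rng.normal(scale=0.1, size=(n, k)) for X in views]
    V_star = np.zeros((n, k))                  # shared consensus factors
    for _ in range(iters):
        for m, (X, obs) in enumerate(zip(views, masks)):
            U, V = Us[m], Vs[m]
            R = (V[obs] @ U) - X[obs]          # residual on observed rows only
            gU = V[obs].T @ R                  # gradient w.r.t. basis U
            gV = np.zeros_like(V)
            gV[obs] = R @ U.T                  # gradient w.r.t. latent factors
            gV += lam * (V - V_star)           # pull every row toward consensus
            Us[m] -= lr * gU
            Vs[m] -= lr * gV
        V_star = np.mean(Vs, axis=0)           # consensus = average of views
    return V_star

# toy run: two modalities over 20 instances, 5 instances missing in each
X1, X2 = rng.normal(size=(20, 8)), rng.normal(size=(20, 6))
m1 = np.ones(20, bool); m1[15:] = False        # last 5 missing in view 1
m2 = np.ones(20, bool); m2[:5] = False         # first 5 missing in view 2
V = consensus_factorization([X1, X2], [m1, m2])
print(V.shape)  # (20, 5)
```

Note that every instance ends up with a row in the consensus matrix `V_star`, even instances observed in only one modality — which is the point of the consensus coupling.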
“…For each dataset, we randomly select 20% of the data for the test (query) set, and the remaining instances are used for training. Considering that FLICKR25K, IAPR TC-12, WIKI, and NUS-WIDE are complete in their raw form, we first conduct experiments on complete data, then conduct further experiments on segmented incomplete data as in (Yang et al. 2018). To demonstrate the generalization ability, we also experiment on a real-world incomplete multi-modal dataset, i.e., WKG Game-Hub, which contains 27,276 instances with two modalities, and 4,946 instances appear with … To verify the learned feature representations of our method, we examine the task of cross-modal retrieval.…”
confidence: 99%
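The split protocol in the statement above (a random 20% query/test set plus artificially segmented incomplete training data) can be sketched as follows. The incompleteness ratios used here (30% of training instances incomplete, modality dropped at random) are assumptions for illustration, not the cited paper's settings.

```python
# Hypothetical sketch of the evaluation split described above: hold out a
# random 20% as the query/test set, then simulate incompleteness by
# dropping one of the two modalities for a fraction of training instances.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
idx = rng.permutation(n)
test_idx, train_idx = idx[: n // 5], idx[n // 5 :]   # 20% / 80% split

# assumed ratio: 30% of training instances keep only one modality
incomplete = rng.random(len(train_idx)) < 0.3
keep_view1 = rng.random(len(train_idx)) < 0.5        # which modality survives
mask_v1 = ~incomplete | keep_view1                   # True where view 1 observed
mask_v2 = ~incomplete | ~keep_view1                  # True where view 2 observed

print(len(test_idx), len(train_idx))  # 200 800
```

By construction every training instance retains at least one modality, which matches the partial multi-modal setting: modalities are missing, but no instance is dropped entirely.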
“…It is thus essential that multi-modal approaches can make predictions when only one data type is available. Unfortunately, existing multi-modal approaches with this capability [49,50,51,52] do not focus on language and graphs. Further, missing-data imputation techniques use adversarial learning to impute missing values [53,54], but the imputed values can introduce unwanted data bias.…”
confidence: 99%