Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/402

Deep Multi-View Concept Learning

Abstract: Multi-view data is common in real-world datasets, where different views describe distinct perspectives. To better summarize the consistent and complementary information in multi-view data, researchers have proposed various multi-view representation learning algorithms, typically based on factorization models. However, most previous methods were focused on shallow factorization models which cannot capture the complex hierarchical information. Although a deep multi-view factorization model has been proposed rece…

Cited by 36 publications (17 citation statements)
References 11 publications
“…For example, how humans require only one or a few examples to acquire a concept is incorporated through one-shot or few-shot learning or how known concepts can be used to recognize new exemplars is achieved through incremental learning and memory modules. Many more approaches to concept learning using deep learning techniques exist (e.g., Wang et al, 2015 ; Dolgikh, 2018 ; Xu et al, 2018 ; Rodriguez et al, 2019 ). In general, these approaches yield high levels of accuracy but require huge amounts of training data and/or training time.…”
Section: Related Work
confidence: 99%
“…In fact, exploring consistent or complementary information among multiple views is an important research direction [10]. Recently, [12,13] have also shown that simultaneously discerning these two kinds of information can achieve better representation learning, but these are semi-supervised methods, i.e., partial label information for the multi-view data must be provided. Therefore, it is still worth researching how to learn a low-dimensional representation with consistent and complementary information across multiple views via neural networks for multi-view clustering.…”
Section: Introduction
confidence: 99%
“…Inspired by the recent remarkable success of deep learning in feature learning (Hinton and Salakhutdinov 2006), a surge of multi-view learning methods based on deep neural networks (DNNs) has been proposed (Ngiam et al 2011; Wang et al 2015; Xu et al 2018). First, Ngiam et al (Ngiam et al 2011) explored extracting shared representations by training a bimodal deep autoencoder.…”
Section: Introduction
confidence: 99%
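The bimodal autoencoder idea mentioned above (concatenate views, encode them into one shared code, reconstruct both) can be illustrated with a minimal sketch. This is a toy linear autoencoder trained by plain gradient descent on synthetic data, not the deep model of Ngiam et al.; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Toy "bimodal autoencoder" sketch: two views are concatenated,
# encoded to a shared code Z, and both views are reconstructed.
rng = np.random.default_rng(0)
n = 100
X1 = rng.standard_normal((n, 8))   # view 1 (hypothetical features)
X2 = rng.standard_normal((n, 6))   # view 2 (hypothetical features)
X = np.hstack([X1, X2])            # joint input, shape (n, 14)

d, k = X.shape[1], 4               # input dim, shared-code dim
We = rng.standard_normal((d, k)) * 0.1   # encoder weights
Wd = rng.standard_normal((k, d)) * 0.1   # decoder weights
lr = 0.01
for _ in range(500):
    Z = X @ We                 # shared representation of both views
    Xhat = Z @ Wd              # reconstruction of the concatenated views
    E = Xhat - X               # reconstruction error
    Wd -= lr * (Z.T @ E) / n   # gradient step on decoder
    We -= lr * (X.T @ (E @ Wd.T)) / n  # gradient step on encoder

print(Z.shape)  # (100, 4)
```

A real bimodal deep autoencoder stacks nonlinear layers and also trains with one view masked, so the shared code remains predictive when a modality is missing.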
“…However, the aforementioned approaches are unfortunately only feasible in the two-view case, failing to handle more than two views. To explicitly summarize the consensus and complementary information in multi-view data, Deep Multi-view Concept Learning (DMCL) (Xu et al 2018) performs non-negative factorization on every view hierarchically.…”
Section: Introduction
confidence: 99%
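The hierarchical per-view factorization described above can be sketched as stacking basic NMF layers on each view, so that each view's data matrix is decomposed through several non-negative factors down to a deep representation. This is a minimal NumPy illustration of the general idea (multiplicative-update NMF applied layer by layer), not the actual DMCL objective, which additionally couples the views; the function names and layer sizes are assumptions.

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Basic NMF via multiplicative updates: X ≈ W @ H, all factors non-negative."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)   # update H, stays non-negative
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)   # update W, stays non-negative
    return W, H

def hierarchical_nmf(views, layer_sizes):
    """Factorize each view through a stack of NMF layers:
    X_v ≈ W_v1 @ W_v2 @ ... @ H_v, returning the deepest H_v per view."""
    reps = []
    for X in views:
        H = X
        for k in layer_sizes:
            _, H = nmf(H, k)   # feed each layer's coefficients to the next
        reps.append(H)
    return reps

# two toy non-negative "views" of the same 30 samples
rng = np.random.default_rng(1)
views = [rng.random((20, 30)), rng.random((15, 30))]
reps = hierarchical_nmf(views, layer_sizes=[10, 5])
print([r.shape for r in reps])  # [(5, 30), (5, 30)]
```

Each view ends up with a 5-dimensional non-negative code per sample; DMCL additionally enforces agreement between these deepest codes to capture the consensus across views.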