2019
DOI: 10.1007/s00521-019-04577-z

Multi-view representation learning in multi-task scene

Abstract: Recent decades have witnessed considerable progress in both multi-task learning and multi-view learning, but the setting that considers both learning scenarios simultaneously has received comparatively little attention. How to exploit the latent representations of each task's multiple views to improve the performance of every learning task is a challenging problem. To address this, we propose a novel semi-supervised algorithm, termed Multi-Task Multi-View learning based on Common and Special Features (MTMVCSF). In gene…


Cited by 6 publications (5 citation statements)
References 48 publications
“…Specifically, HOG describes the edge distributions of objects based on gradient features. LBP has a powerful ability to represent object texture. Hu moment invariants are fast to compute and can better describe larger objects in the image. In addition, to ensure the quality of the extracted features, a simple and effective illumination normalization method is adopted before feature extraction [19]. Unit-norm normalization is applied to extract feature vectors for each particle's viewing angle [46].…”
Section: Multiple Feature Extraction Methods For Target Identificationmentioning
confidence: 99%
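The unit-norm normalization step described in this statement can be sketched in a few lines; a minimal NumPy version, where the function name and the toy descriptor values are illustrative assumptions rather than details from the cited paper:

```python
import numpy as np

def unit_norm_rows(features, eps=1e-12):
    """Scale each row to unit L2 norm (a common unit-norm normalization)."""
    features = np.asarray(features, dtype=float)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, eps)  # eps guards against zero rows

# Hypothetical per-view descriptors (e.g. one HOG, LBP, or Hu-moment
# vector per row); the numbers are made up for illustration.
views = np.array([[3.0, 4.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [1.0, 1.0, 1.0]])
normalized = unit_norm_rows(views)  # every row now has L2 norm 1
```

Normalizing each per-view descriptor this way keeps the subsequent learning step from being dominated by views whose raw features happen to have larger magnitudes.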
“…To identify targets more robustly, researchers have proposed a multi-task and multi-perspective learning method to optimize the target identification problem [18,19]. In this paper, we propose a target identification method based on multi-task multi-view sparse learning (MVMT).…”
Section: Introductionmentioning
confidence: 99%
“…SPLIT [24] learns view-wise weights and saves task correlations by multiplicatively decomposing the basis matrix. MTMVCSF [20] captures both consistent and complementary information by latent feature representation of multiple views. These methods can be considered as matrix-based methods, because they are limited to model first-order feature interactions by organizing model weights in a simple matrix form.…”
Section: Related Workmentioning
confidence: 99%
“…Canonical correlation analysis (CCA) [16] and co-training [17] are generally viewed as the early work on multi-view learning, and scholars have used them as a basis to develop many variants in this field. According to Sun's book [18] (the first book on multi-view learning), the research topics in this area mainly include: multi-view supervised [19,20] and semi-supervised [21,22] learning, multi-view subspace learning [23,24], multi-view clustering [25,26,27], multi-view active learning [28,29], multi-view transfer [30] and multi-task learning [31,9], multi-view deep learning [32], and view construction [33]. Besides, multi-view learning is widely used in many applications, such as computer vision [34], social networks [35], recommendation systems [36], and medical research [37].…”
Section: Multi-view Learningmentioning
confidence: 99%
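CCA, named in the statement above as the classic starting point of multi-view learning, finds directions in two views whose projections are maximally correlated. A minimal sketch via whitening and an SVD, assuming the standard two-view formulation (the function name, regularizer, and toy data are illustrative, not from the cited works):

```python
import numpy as np

def cca_top_pair(X, Y, reg=1e-8):
    """Top pair of canonical directions for two views.
    Returns (a, b, rho): projections X @ a and Y @ b correlate with strength rho."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # small ridge for stability
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    # Whitened cross-covariance: its singular values are the canonical correlations.
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    a = np.linalg.solve(Lx.T, U[:, 0])
    b = np.linalg.solve(Ly.T, Vt[0])
    return a, b, s[0]

# Toy data: both views observe the same latent signal plus noise.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z, z]) + 0.1 * rng.normal(size=(500, 2))
Y = np.hstack([z, z, z]) + 0.1 * rng.normal(size=(500, 3))
a, b, rho = cca_top_pair(X, Y)  # rho is close to 1 for strongly shared signal
```

Because both toy views are noisy copies of the same latent variable, the recovered top canonical correlation is close to 1, which is exactly the "consistent information across views" that the multi-view methods surveyed here build on.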
“…Similar to other multi-view learning algorithms [8,9], multi-view subspace clustering also focuses on the consistency and complementarity of multiple views, where consistency represents the views' shared subspace, which can be achieved by low-rank regularization [4] or view alignment [5], and complementarity usually concerns the view-specific subspaces, which can be achieved by Frobenius regularization [6]. Subspace clustering works well under the assumption that the sample space has relatively separable decision boundaries; when facing highly nonlinear data, merely regularizing the self-representation matrix cannot effectively improve clustering performance due to the limitations of the input representation.…”
Section: Introductionmentioning
confidence: 99%
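The Frobenius-regularized self-representation mentioned in this statement has a closed form, which makes the idea easy to sketch. A minimal version assuming the standard ridge formulation C = argmin ||X − XC||² + λ||C||² (the function name, λ value, and toy data are illustrative; real methods add constraints and follow with spectral clustering on the affinity):

```python
import numpy as np

def self_representation_affinity(X, lam=0.1):
    """Frobenius-regularized self-representation:
    C = argmin ||X - X C||_F^2 + lam * ||C||_F^2  =>  C = (X^T X + lam I)^{-1} X^T X.
    X is d x n with one sample per column; returns a symmetric affinity matrix."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)                # drop the trivial self-representation
    return 0.5 * (np.abs(C) + np.abs(C.T))  # symmetrize for spectral clustering

# Toy data: 5 samples on each of two orthogonal lines in R^3, chosen so the
# resulting affinity is exactly block diagonal (setup is illustrative only).
rng = np.random.default_rng(1)
u = np.array([[1.0], [0.0], [0.0]])
v = np.array([[0.0], [1.0], [0.0]])
X = np.hstack([u * rng.normal(size=5), v * rng.normal(size=5)])
A = self_representation_affinity(X)
```

Each sample is reconstructed almost entirely from samples in its own subspace, so the affinity matrix A is block diagonal and a subsequent spectral clustering step would recover the two groups; the quoted caveat is that this breaks down when the data are highly nonlinear and self-representation in the raw input space no longer captures the cluster structure.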