2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7966296

Learning from semantically dependent multi-tasks

Cited by 7 publications (3 citation statements)
References 30 publications

“…Our work is motivated by recent advances in multi-view learning [28,27,26,29], multiple kernel learning [46,31], tensor analysis [56,36], graph learning [30], and adversarial learning [55,34]. Compared to previous work on multi-source sentiment analysis, our proposed framework focuses on transferring knowledge from multiple source domains while effectively utilizing unlabeled data in an end-to-end way.…”
Section: Cross-domain Sentiment Analysis
Mentioning confidence: 99%
“…Nevertheless, the real situations are: (1) there often exist many domains of comments at the same time; and (2) obtaining an adequate amount of labeled training data for every domain of interest is typically impractical, which makes further study of how to handle limited-resource data from multiple domains necessary and worthwhile. To this end, a series of powerful multi-task models has been established, which treat learning for each domain as a separate task so that the tasks can reinforce and complement one another [52,38,39,12,11,21,59,61,13,37,38,49,36,54]. Among them, a group of representative studies employs a shared-private model [37,52,38,39,12,11,21,59,61], which introduces two feature spaces for each task: one stores task-dependent features, the other captures features shared across tasks.…”
Section: Introduction
Mentioning confidence: 99%
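The shared-private design described in the excerpt above is concrete enough to sketch. Below is a minimal, hedged illustration in PyTorch of the two-feature-space idea: one private encoder per task plus one shared encoder, with a per-task classifier head over the concatenated features. All names and dimensions (`SharedPrivateModel`, `hidden_dim`, and so on) are illustrative assumptions, not the architecture of any specific cited paper.

```python
import torch
import torch.nn as nn

class SharedPrivateModel(nn.Module):
    """Illustrative sketch of a shared-private multi-task classifier:
    a shared feature space for all tasks and a private one per task."""

    def __init__(self, input_dim: int, hidden_dim: int,
                 num_tasks: int, num_classes: int):
        super().__init__()
        # Shared encoder: captures features common to all tasks.
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One private encoder per task: stores task-dependent features.
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
             for _ in range(num_tasks)]
        )
        # One classification head per task over the concatenated features.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, num_classes) for _ in range(num_tasks)]
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        shared_feat = self.shared(x)             # shared feature space
        private_feat = self.private[task_id](x)  # task-specific feature space
        combined = torch.cat([shared_feat, private_feat], dim=-1)
        return self.heads[task_id](combined)

# Illustrative usage: 4 domains, 300-d inputs, binary sentiment labels.
model = SharedPrivateModel(input_dim=300, hidden_dim=128,
                           num_tasks=4, num_classes=2)
logits = model(torch.randn(8, 300), task_id=1)   # shape (8, 2)
```

In adversarial variants of this model, an additional discriminator is trained to keep task-specific information out of the shared space; the sketch above omits that component for brevity.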
“…It has been shown that learning multiple related tasks simultaneously can be advantageous relative to learning these tasks independently [34,35]. Multi-task learning (MTL) methods were first developed for classification problems and can be divided into two types: regularization-based learning and joint feature learning.…”
Section: Multi-task Clustering
Mentioning confidence: 99%
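The excerpt above names regularization-based learning as one of the two MTL families. A minimal sketch of that flavor follows, assuming PyTorch: each task's weight vector is pulled toward the across-task mean, so related tasks share structure while retaining task-specific deviations. The function name `mtl_mean_regularizer` and the hyperparameter `lam` are hypothetical, chosen for illustration.

```python
import torch

def mtl_mean_regularizer(task_weights, lam: float = 1e-2) -> torch.Tensor:
    """Regularization-based MTL penalty (illustrative): penalize each
    task's deviation from the mean weight vector across tasks, so that
    related tasks are encouraged to share structure.
    `lam` is a hypothetical trade-off hyperparameter."""
    W = torch.stack(list(task_weights))    # (num_tasks, dim)
    mean_w = W.mean(dim=0, keepdim=True)   # shared component across tasks
    return lam * ((W - mean_w) ** 2).sum()

# Illustrative usage: add the penalty to the summed per-task losses.
weights = [torch.randn(300, requires_grad=True) for _ in range(4)]
penalty = mtl_mean_regularizer(weights)
```

Joint feature learning, the other family mentioned, instead couples tasks through a shared representation (for example, a common feature matrix or encoder), as in the shared-private sketch shown earlier.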