Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2011
DOI: 10.1145/2020408.2020423
Integrating low-rank and group-sparse structures for robust multi-task learning

Cited by 285 publications (188 citation statements). References 25 publications.
“…We ran our experiments over five random repetitions of a 20/80% train/test split, found regularization parameters using cross-validation, and report results on the test set, where regression outputs are computed by averaging frame-level scores for the GIFs in each emotion. We adopt the normalized mean squared error (nMSE) used in previous studies [1,5]. The nMSE is defined as the mean squared error (MSE) divided by the variance of the target vector, which ensures that the error is not biased toward models that systematically over- or under-predict.…”
Section: Multi-task Emotion Regression
confidence: 99%
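The nMSE quoted in this excerpt is straightforward to compute. As a minimal sketch in NumPy (the function name nmse is illustrative, not taken from the cited paper):

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error, as defined in the excerpt:
    MSE divided by the variance of the target vector, so the score
    is not biased toward models that systematically over- or
    under-predict."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    return mse / np.var(y_true)
```

Dividing by the target variance makes scores comparable across tasks whose targets have different scales, which is why it is a common choice in multi-task regression benchmarks.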
“…For example, [13][14][15][34][35][36] assume that there is a set of features (either in the original space or in a transformed space) shared across all tasks. There are also multi-task learning algorithms using sparse constraints, such as the ℓ1 norm constraint [11], the ℓ2,1 norm constraint [37], the trace norm constraint [15,36], and combinations of these, such as ℓ1 + ℓ1,q norm multi-task learning [16], sparse and low-rank multi-task learning [13], robust multi-task learning using group-sparse and low-rank constraints [38], and robust multi-task feature learning [39].…”
Section: Related Work
confidence: 99%
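The "group-sparse and low-rank" combination cited as [38] in this excerpt corresponds to the headline paper of this page. As a hedged sketch of that style of model, assuming the common decomposition W = L + S with a trace-norm penalty on L (tasks sharing a low-dimensional subspace) and a column-wise group penalty on S (a few outlier tasks); all names and the fixed step size below are illustrative, not the authors' implementation:

```python
import numpy as np

def prox_trace(M, tau):
    """Singular-value soft-thresholding: proximal operator of tau*||M||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_group(M, tau):
    """Column-wise group soft-thresholding: proximal operator of
    tau * sum_t ||M[:, t]||_2, which shrinks whole columns (tasks)."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def robust_mtl(Xs, ys, lam_L, lam_S, step=1e-3, iters=500):
    """Proximal-gradient sketch for
    min_{L,S} sum_t ||X_t (L+S)[:,t] - y_t||^2 + lam_L*||L||_* + lam_S*||S||_{1,2},
    where Xs is a list of (n_t, d) design matrices and ys the targets."""
    d, T = Xs[0].shape[1], len(Xs)
    L = np.zeros((d, T))
    S = np.zeros((d, T))
    for _ in range(iters):
        # Gradient of the smooth loss is identical with respect to L and S.
        G = np.zeros((d, T))
        for t in range(T):
            w_t = L[:, t] + S[:, t]
            G[:, t] = 2.0 * Xs[t].T @ (Xs[t] @ w_t - ys[t])
        L = prox_trace(L - step * G, step * lam_L)
        S = prox_group(S - step * G, step * lam_S)
    return L, S
```

Singular-value thresholding drives L toward low rank, while the column-wise shrinkage zeroes out the parameter vectors of tasks that do not fit the shared structure; the two proximal steps separate because the penalty is additive over L and S.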
“…Different assumptions on task relatedness lead to various formulations. Examples include multi-task learning with joint feature learning [4,26] and sparse and low-rank multi-task learning [22].…”
Section: Effectiveness
confidence: 99%
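For contrast with the column-wise operator in the previous sketch, the joint feature learning cited as [4,26] typically couples tasks through rows of the weight matrix via an ℓ2,1 penalty. A minimal sketch of that proximal operator (an assumption-level illustration, not code from the cited works):

```python
import numpy as np

def prox_l21_rows(W, tau):
    """Row-wise l2,1 soft-thresholding: proximal operator of
    tau * sum_i ||W[i, :]||_2. Zeroing an entire row discards that
    feature for every task at once, which is the coupling behind
    joint feature learning."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
```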