2020
DOI: 10.48550/arxiv.2009.01591
Preprint

Large Dimensional Analysis and Improvement of Multi Task Learning

Cited by 2 publications (4 citation statements)
References 0 publications
“…As shown in an enlarging spectrum of articles, the large dimensional behavior of Q has immediate further implications to the performance behavior of many machine learning algorithms, ranging from support vector machines (Kammoun & Alouini, 2020; Huang, 2017) to semi-supervised graph inference (Mai & Couillet, 2018), transfer and multi-task learning (Tiomoko et al, 2020), random feature map learning (Liao & Couillet, 2018b; Pennington & Worah, 2019), or neural network dynamics (Liao & Couillet, 2018a; Advani et al, 2020), to cite a few.…”
[Footnote 3: This may at first be thought to follow from strong feature covariance (thus not close to Ip), but it turns out that in-sample correlation is even stronger, as produced GAN images (or at least their associated VGG features) have in effect a very low variability.]
Section: Discussion (mentioning, confidence: 99%)
“…x,t, i.e., to a matrix-vector multiplication with matrix size p × n of complexity O(n²) (recall that p ∼ n). This is quite unlike competing methods: MTL-LSSVM proposed in [48] solves a system of n linear equations, for a complexity of order O(n³); MTL schemes derived from SVM (CDLS [23], MMDT [22]) also have a similar O(n³) complexity, as these algorithms solve a quadratic programming problem [11]; besides, in these works, a step of model selection via cross validation needs to be performed, which increases the algorithm complexity while simultaneously discarding part of the training data for validation.…”
Section: Complexity of the SPCA-MTL Algorithm (mentioning, confidence: 95%)
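To make the complexity comparison in the statement above concrete, here is a minimal sketch (not taken from the cited papers; the matrices, sizes, and names are placeholder assumptions) contrasting the single p × n matrix-vector product attributed to SPCA-MTL with the n × n linear solve attributed to MTL-LSSVM:

```python
# Hypothetical illustration of the O(n^2) vs O(n^3) contrast quoted above.
# W, A, b are random placeholders, not the actual SPCA-MTL or MTL-LSSVM quantities.
import numpy as np

n, p = 2000, 2000          # assume p ~ n, as stated in the quoted passage
W = np.random.randn(p, n)  # placeholder p x n matrix applied at test time
x = np.random.randn(n)     # placeholder test-sample representation

# SPCA-MTL-like step: one matrix-vector product, roughly 2*p*n flops, i.e. O(n^2)
score = W @ x

# MTL-LSSVM-like step: solving an n x n linear system, roughly (2/3)*n^3 flops, i.e. O(n^3)
A = np.random.randn(n, n) + n * np.eye(n)  # placeholder, well-conditioned system matrix
b = np.random.randn(n)
alpha = np.linalg.solve(A, b)
```

As the cubic term grows much faster than the quadratic one, the gap between the two steps widens quickly with the number of training samples n.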
“…On the MTL side, several methods have been proposed in unsupervised [32,45,6], semi-supervised [40,30] and supervised (parameter-based [48,16,51,1] or feature-based [2,29]) flavors. Although most of these works achieve satisfactory performance on both synthetic and real data, few theoretical analyses and guarantees exist, so that instances of negative transfer are likely to occur.…”
Section: Related Work (mentioning, confidence: 99%)