2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.186

Scalable Multitask Representation Learning for Scene Classification

Abstract: We propose a multitask learning approach that jointly trains a low-dimensional representation and the corresponding classifiers. The method scales to high-dimensional image descriptors, such as the Fisher Vector, and consistently outperforms the current state of the art on the SUN397 scene classification benchmark across varying amounts of training data.
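The shared-subspace idea described in the abstract — learning one low-dimensional projection jointly with per-task linear classifiers — can be sketched as follows. This is a minimal NumPy illustration under assumed conditions: the tiny synthetic sizes, the squared-loss objective, and the alternating least-squares updates are all illustrative choices, not the authors' actual regularizer or solver (which operates on Fisher Vectors and the 397 SUN classes at much larger scale).

```python
import numpy as np

# Hypothetical sizes: descriptor dim d, shared subspace dim k, T tasks,
# n samples. The real setting uses far higher-dimensional descriptors.
rng = np.random.default_rng(0)
d, k, T, n = 40, 2, 5, 150

# Synthetic descriptors and {-1, +1} labels that share a rank-k structure,
# mimicking related classification tasks.
X = rng.standard_normal((n, d))
Y = np.sign(X @ rng.standard_normal((d, k)) @ rng.standard_normal((k, T)))

lam = 1e-6                        # tiny ridge term, only to stabilize solves
U = rng.standard_normal((k, d))   # shared projection, learned below
losses = []
for _ in range(20):
    # With U fixed, fit all task classifiers at once by least squares
    # in the k-dimensional shared space.
    Z = X @ U.T                                                  # (n, k)
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ Y).T    # (T, k)
    # With W fixed, the optimal U is also closed-form, from the
    # Kronecker-structured normal equations:
    #   U^T = (X^T X)^-1 X^T Y W (W^T W)^-1
    A = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)      # (d, T)
    U = np.linalg.solve(W.T @ W + lam * np.eye(k), (A @ W).T)    # (k, d)
    losses.append(float(np.mean((X @ U.T @ W.T - Y) ** 2)))

# Sign agreement of the joint rank-k model with the training labels.
acc = float(np.mean(np.sign(X @ U.T @ W.T) == Y))
```

Because every task's classifier acts on the same k-dimensional projection `Z = X @ U.T`, statistical strength is shared across tasks, which is why such joint training is attractive when each task has limited training data.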

Cited by 41 publications (28 citation statements, published 2015–2023); references 15 publications.
“…MTL has been shown successful in discovering latent relationships among tasks, which cannot be found by learning each task independently. It has been widely applied to machine learning [2,44] and computer vision [45,24]. In addition, MTL is particularly suitable for the situation in which only a limited amount of training data is available for each task.…”
Section: Introduction
confidence: 99%
“…Recently, Maksim et al. proposed a novel multi-task learning method to learn a low-dimensional representation jointly with the corresponding classifiers. This scalable multi-task representation learning method is suitable for high-dimensional features [36]. Recent works point out that we need to consider whether all the tasks are related and share a common set of features.…”
Section: Related Work
confidence: 99%
“…We apply the GDT algorithm to the problem of multi-task learning, which has been successfully applied in a wide range of application areas, ranging from neuroscience [112], natural language understanding [41], speech recognition [100], computer vision [101], and genetics [128] to remote sensing [123], image classification [74], spam filtering [119], web search [33], disease prediction [135], and eQTL mapping [69]. By transferring information between related tasks it is hoped that samples will be better utilized, leading to improved generalization performance.…”
Section: GDT for Multi-task Learning
confidence: 99%