2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7299188

Curriculum learning of multiple tasks

Abstract: Sharing information between multiple tasks enables algorithms to achieve good generalization performance even from small amounts of training data. However, in a realistic scenario of multi-task learning not all tasks are equally related to each other, hence it could be advantageous to transfer information only between the most related tasks. In this work we propose an approach that processes multiple tasks in a sequence with sharing between subsequent tasks instead of solving all tasks jointly. Subsequently, we…


Cited by 203 publications (141 citation statements, all classified as mentioning). References 35 publications. Citing publications span 2016 to 2024.
“…Similarly to RankSVM, MTL1 can also construct non-linear predictors using Gaussian kernels (with hyperparameter σ_S^2). D) MTL2 adapts Pentina et al.'s curriculum learning approach [19], which penalizes the deviation of the main predictor parameter w from a single best reference predictor w_k. Pentina et al.'s original algorithm uses a bound on the generalization accuracy to select the reference predictor, which is not directly applicable to our rank learning problem.…”
Section: Baseline Methods (mentioning)
confidence: 99%
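To make the penalty concrete, below is a minimal sketch of training a linear predictor whose deviation from a reference predictor is penalized, in the spirit of the MTL2 baseline quoted above. The hinge loss, the subgradient optimizer, and all names (train_with_reference, lam, lr) are illustrative assumptions, not the objective or code of the cited work.

```python
import numpy as np

def train_with_reference(X, y, w_ref, lam=1.0, lr=0.01, epochs=500):
    """Minimize mean hinge loss + lam * ||w - w_ref||^2 by subgradient
    descent, so the learned predictor stays close to the reference."""
    n = X.shape[0]
    w = w_ref.copy()                              # warm-start at the reference
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1                      # margin-violating examples
        grad = -(X[active].T @ y[active]) / n     # hinge subgradient
        grad += 2.0 * lam * (w - w_ref)           # pull toward the reference
        w -= lr * grad
    return w

# Toy usage: binary labels in {-1, +1}, reference predictor set to zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]))
w = train_with_reference(X, y, w_ref=np.zeros(5), lam=0.1)
```

Warm-starting at w_ref and shrinking toward it means that with little data the solution stays near the reference predictor, while with more data the loss term dominates.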
“…In the second experiment, we focus on differentiating between 'easy' and 'hard' images of animal classes. We use a subset of the Animals with Attributes (AwA) dataset [13] for which the annotation of easy-hard scores is available [22]. For each class the annotation specifies ranking scores of its images from easiest to hardest.…”
Section: Methods (mentioning)
confidence: 99%
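As an illustration of how per-class easy-hard ranking scores can be consumed, here is a minimal sketch that orders one class's images from easiest to hardest before training; the dictionary layout and the names scores and curriculum_order are hypothetical, not the format of the actual AwA easy-hard annotations [22].

```python
def curriculum_order(scores):
    """Given {image_id: easiness_score} for one class (higher = easier),
    return image ids sorted from easiest to hardest, so training can
    start with easy examples and add harder ones later."""
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical easiness scores for three images of one class.
scores = {"img_001": 0.92, "img_017": 0.35, "img_042": 0.78}
print(curriculum_order(scores))   # -> ['img_001', 'img_042', 'img_017']
```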
“…With MTurk, it has become possible to collect annotations for large datasets such as ImageNet [26], TinyImages [31], COCO [14], and Places [38]. Moreover, it has become prevalent to collect task-specific datasets, for example for studying attributes and their strength [20] and for determining the easiness or hardness of a particular classification task [22]. Such task-specific datasets often require annotations that are more ambiguous than the typical object annotations 'present' or 'not present'.…”
Section: Introduction (mentioning)
confidence: 99%
“…Some researchers have combined self-paced learning with multi-task learning. Pentina et al. [20] proposed curriculum learning of multiple tasks, which solves multiple tasks in a sequential manner. A limitation of this model is that it allows transfer only from the previous task when solving the current one.…”
Section: Introduction (mentioning)
confidence: 99%
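The sequential transfer described in this snippet can be sketched as follows: each task's predictor is regularized toward the solution of the task just before it, so information flows only from the previous task, which is exactly the limitation noted above. The ridge-style closed form and all names (solve_task, sequential_curriculum) are illustrative assumptions, not Pentina et al.'s exact algorithm.

```python
import numpy as np

def solve_task(X, y, w_prev, lam=1.0):
    # Closed form of  min_w ||Xw - y||^2 + lam * ||w - w_prev||^2 :
    #   (X^T X + lam * I) w = X^T y + lam * w_prev
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_prev)

def sequential_curriculum(tasks, dim, lam=1.0):
    """Process tasks in the given order; each predictor is regularized
    toward the previous one, so knowledge transfers only from the
    immediately preceding task."""
    w_prev = np.zeros(dim)            # no prior knowledge for the first task
    solutions = []
    for X, y in tasks:                # tasks: list of (X, y) array pairs
        w_prev = solve_task(X, y, w_prev, lam)
        solutions.append(w_prev)
    return solutions
```

Under this scheme the order in which tasks are processed matters: an unrelated early task can bias every later predictor, which is why choosing a good task sequence is central to the curriculum idea.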