2018
DOI: 10.1007/978-3-030-01270-0_17

Dynamic Task Prioritization for Multitask Learning

Cited by 267 publications (176 citation statements)
References 50 publications
“…(ii) The ranges of the loss functions from different tasks can be different, which hampers consistent and balanced optimization of the tasks. (iii) The difficulties of the tasks can be different, which affects the pace at which the tasks are learned, and hence hinders the training process [121]. Figure 14 illustrates a case where the classification loss dominates the overall gradient.…”
Section: Imbalance 4: Objective Imbalance
confidence: 99%
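To make the dominance described in that quote concrete, here is a minimal diagnostic sketch in PyTorch; the function name and the toy losses are illustrative assumptions, not from the cited paper. Comparing per-task gradient norms on the shared parameters shows when one loss range swamps the others.

import torch

# Illustrative sketch: measure how strongly each task's loss pulls on the
# shared parameters by comparing per-task gradient norms.
def task_gradient_norms(shared_params, task_losses):
    norms = []
    for loss in task_losses:
        grads = torch.autograd.grad(loss, shared_params, retain_graph=True)
        norms.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)))
    return norms

w = torch.randn(10, requires_grad=True)   # stand-in for shared weights
x = torch.randn(10)
losses = [100.0 * (w @ x) ** 2,   # e.g. a classification loss with a large range
          0.01 * (w @ x) ** 2]    # e.g. a regression loss with a small range
print(task_gradient_norms([w], losses))   # the first norm dwarfs the second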
“…The hyperparameters in the proposed regularizer control the hardness of not only the instances but also the tasks, and accordingly, the hardness level is increased during training. In another work motivated by the self-paced learning approach, Guo et al. [121] use a more diverse set of tasks, including object detection. Their method weighs the losses dynamically based on the exponential moving average of a predefined key performance indicator (e.g.…”
Section: Multi-task Learning
confidence: 99%
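The quote pins down an exponential moving average of a per-task key performance indicator (KPI); the sketch below shows one way such a scheme can be wired up in Python. The focal-style conversion from KPI to weight, the class name, and the hyperparameter values are illustrative assumptions, not a verbatim reproduction of Guo et al.'s method.

import math

class DynamicTaskWeighter:
    """Hypothetical sketch: track an EMA of each task's KPI in (0, 1],
    e.g. accuracy, and weight hard (low-KPI) tasks more heavily."""
    def __init__(self, num_tasks, alpha=0.9, gamma=1.0):
        self.alpha = alpha            # EMA decay toward the newest KPI
        self.gamma = gamma            # focusing strength for hard tasks
        self.kpi_ema = [None] * num_tasks

    def update(self, task, kpi):
        prev = self.kpi_ema[task]
        self.kpi_ema[task] = kpi if prev is None else self.alpha * kpi + (1 - self.alpha) * prev

    def weight(self, task):
        # Focal-style weight: near-solved tasks (KPI close to 1) fade out.
        k = min(max(self.kpi_ema[task], 1e-6), 1 - 1e-6)
        return -((1 - k) ** self.gamma) * math.log(k)

weighter = DynamicTaskWeighter(num_tasks=2)
weighter.update(0, kpi=0.9)   # near-solved task
weighter.update(1, kpi=0.4)   # struggling task
weights = [weighter.weight(t) for t in range(2)]  # task 1 gets ~50x task 0's weight

The total training loss would then be the weighted sum of the per-task losses, with the weights recomputed as the EMAs move during training.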
“…Liu et al. (2018) add a moving average of task weights obtained by a method similar to GradNorm. Guo et al. (2018), on the other hand, propose dynamic weight adjustments based on task difficulty. As the difficulty of learning changes over training time, the task weights are updated, allowing the model to prioritize difficult tasks.…”
Section: Multi-task Loss
confidence: 99%
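For contrast, here is a minimal sketch in the spirit of the loss-ratio moving-average weighting the quote attributes to Liu et al.; the function name, history layout, and temperature value are illustrative assumptions.

import math

def dwa_weights(loss_hist, T=2.0):
    """Hypothetical sketch: loss_hist[k] holds a task's last two recorded
    losses [L_k(t-2), L_k(t-1)]. Tasks whose loss shrinks slowly
    (ratio near 1) receive larger weights."""
    K = len(loss_hist)
    ratios = [hist[-1] / hist[-2] for hist in loss_hist]   # per-task descent rate
    exps = [math.exp(r / T) for r in ratios]               # temperature-softened
    total = sum(exps)
    return [K * e / total for e in exps]                   # weights sum to K

# Task 0 improves quickly, task 1 stalls, so task 1 is upweighted.
print(dwa_weights([[1.0, 0.5], [1.0, 0.95]]))   # ~[0.89, 1.11]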
“…Multi-task Learning: Multi-task learning tries to learn many tasks simultaneously, to obtain more general models or multiple outputs in a single run [24,6,14]. Some recent works have addressed the autonomous driving scenario [3,30], jointly learning related tasks such as depth estimation and semantic segmentation in order to improve performance.…”
Section: Related Work
confidence: 99%
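As a concrete picture of "multiple outputs in a single run", the following is a minimal hard-parameter-sharing sketch in PyTorch for joint semantic segmentation and depth estimation; the layer sizes, head shapes, and class name are illustrative assumptions, not taken from the cited works.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hypothetical sketch: one shared encoder feeding two task heads."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared across tasks
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)      # segmentation logits
        self.depth_head = nn.Conv2d(64, 1, 1)              # per-pixel depth

    def forward(self, x):
        feats = self.encoder(x)                            # computed once
        return self.seg_head(feats), self.depth_head(feats)

net = MultiTaskNet()
seg_logits, depth = net(torch.randn(1, 3, 128, 128))       # one forward pass, two outputs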