2020
DOI: 10.48550/arxiv.2007.02693
Preprint

Auxiliary Learning by Implicit Differentiation

Abstract: Training with multiple auxiliary tasks is a common practice used in deep learning for improving the performance on the main task of interest. Two main challenges arise in this multi-task learning setting: (i) Designing useful auxiliary tasks; and (ii) Combining auxiliary tasks into a single coherent loss. We propose a novel framework, AuxiLearn, that targets both challenges, based on implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into …
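
The abstract describes the core mechanism: a small auxiliary network maps the vector of per-task losses to a single scalar training objective, and that network's parameters are tuned against the main task's validation loss via implicit differentiation. Below is a minimal PyTorch sketch of this idea, not the authors' released code: the names (LossCombiner, combiner_hypergradient_step) are ours, and the inverse Hessian in the implicit-function-theorem hypergradient is replaced by the identity, a common one-step approximation.

```python
import torch
import torch.nn as nn

class LossCombiner(nn.Module):
    """Maps a vector of per-task losses to one scalar training objective."""
    def __init__(self, num_tasks: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_tasks, hidden), nn.Softplus(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the combined loss non-negative
        )

    def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
        return self.net(task_losses).squeeze(-1)

def combiner_hypergradient_step(model, combiner, task_losses, val_loss, combiner_opt):
    """One approximate implicit-differentiation update of the combiner parameters phi.

    Implicit function theorem:
        dL_val/dphi = -(dL_val/dW) (d2L_train/dW2)^(-1) (d2L_train/dW dphi);
    the inverse Hessian is approximated here by the identity.
    """
    w = [p for p in model.parameters() if p.requires_grad]
    # v = dL_val/dW, standing in for the inverse-Hessian-vector product
    v = torch.autograd.grad(val_loss, w, retain_graph=True)
    # dL_train/dW still depends on phi through the combiner, so build a graph
    train_loss = combiner(task_losses)
    g = torch.autograd.grad(train_loss, w, create_graph=True)
    # d/dphi of the inner product <v, dL_train/dW> gives the mixed second-derivative term
    inner = sum((vi * gi).sum() for vi, gi in zip(v, g))
    phi_grads = torch.autograd.grad(inner, list(combiner.parameters()))
    combiner_opt.zero_grad()
    for p, g_phi in zip(combiner.parameters(), phi_grads):
        p.grad = -g_phi  # minus sign from the IFT expression
    combiner_opt.step()
```

In a full training loop, task_losses would be torch.stack([main_loss, aux_loss_1, ...]) computed on a training batch with gradients flowing to the model weights, val_loss would be the main-task loss on a held-out batch, and the model itself would be updated between combiner steps by ordinary gradient descent on combiner(task_losses).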

Cited by 6 publications (7 citation statements)
References: 28 publications

“…Furthermore, Reinforcement Learning strategies may be a subject of further research. More precisely, after introducing routing effectively to Reinforcement Learning, we are currently investigating a Machine Learning field known as Auxiliary Learning [71]. In this setting, we assume that because capsule routing helps us model the hierarchical relationships present in these systems, the model we built (A2C with routing) may provide satisfactory results.…”
Section: Discussion (mentioning)
confidence: 99%
“…Specifically, Shi et al. [51] used a concept similar to that of Lin et al. [52], aiming to ensure that the weighted sum of gradients stays close to the primary-task gradient. Furthermore, Navon et al. [54] suggested learning a nonlinear fusion of the auxiliary losses. In contrast, Chen et al. [55] proposed selecting tasks, and individual data samples within each task, to maximize the use of auxiliary information.…”
Section: Auxiliary Learning (mentioning)
confidence: 99%
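
The gradient-alignment idea attributed above to Shi et al. [51] and Lin et al. [52] can be illustrated with a short sketch: each auxiliary gradient contributes only to the extent that it points in the same direction as the primary-task gradient. This is a simplified illustration of the general recipe, not the exact algorithm of either cited paper, and the function name is ours.

```python
import torch
import torch.nn.functional as F

def align_with_primary(primary_grad, aux_grads):
    """Combine flattened gradients so the sum stays close to the primary one.

    Each auxiliary gradient is weighted by its clamped cosine similarity
    with the primary-task gradient; negatively aligned auxiliaries drop out.
    """
    combined = primary_grad.clone()
    for g in aux_grads:
        cos = F.cosine_similarity(primary_grad, g, dim=0)
        combined = combined + torch.clamp(cos, min=0.0) * g
    return combined
```

Here primary_grad and each entry of aux_grads are per-task gradients flattened into 1-D tensors, e.g. torch.cat([p.grad.view(-1) for p in model.parameters()]) after a separate backward pass on each task loss.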
“…The main methodology of the CoLab strategy is inspired by recently proposed methods that generate the weights for pre-defined auxiliary tasks or labels through a similar meta-learning framework [23], [27]. In this study, CoLab is specifically designed for semantic segmentation with heterogeneous background classes, a common scenario in medical imaging.…”
Section: B. Multi-task Learning (mentioning)
confidence: 99%