2023
DOI: 10.1109/tcyb.2022.3223377
A Model-Agnostic Approach to Mitigate Gradient Interference for Multi-Task Learning

Cited by 10 publications (2 citation statements)
References 44 publications
“…2) We will focus on some entities with plenty of links but have a low place in the hierarchy as mentioned in the last subsection and try to analyse them from the perspective of various relation types. Also, extending our model to more complex Riemannian spaces including spherical space and complex space integrating multi-task learning [50] applications will be a practical idea for many real scenarios.…”
Section: Discussion (mentioning)
confidence: 99%
“…Motivated by the success of multi-task learning in various tasks [63][64][65][66], we derive the overall training loss of MMCL as below:…”
Section: Self-distillation Layer (mentioning)
confidence: 99%