Proceedings of the 24th ACM International Conference on Multimedia 2016
DOI: 10.1145/2964284.2964309

Deep Cross Residual Learning for Multitask Visual Recognition

Abstract: Residual learning has recently surfaced as an effective means of constructing very deep neural networks for object recognition. However, current incarnations of residual networks do not allow for the modeling and integration of complex relations between closely coupled recognition tasks or across domains. Such problems are often encountered in multimedia applications involving large-scale content recognition. We propose a novel extension of residual learning for deep networks that enables intuitive learning ac…
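The abstract is truncated above, but as a rough illustration of the idea the title and abstract describe, the sketch below shows one plausible way a "cross-residual" unit could couple two related task branches: each branch keeps the usual identity shortcut from a shared trunk feature and additionally receives the other branch's residual mapping as a cross shortcut. All module and variable names here are hypothetical, and this is not claimed to be the authors' exact formulation.

```python
import torch
import torch.nn as nn

class CrossResidualHead(nn.Module):
    """Hypothetical sketch of two coupled task branches over a shared trunk
    feature map; not the paper's exact formulation."""
    def __init__(self, channels, n_classes_a, n_classes_b):
        super().__init__()
        def branch():
            # Standard two-conv residual mapping F(x) for one task.
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
        self.f_a = branch()  # residual mapping for task A
        self.f_b = branch()  # residual mapping for task B
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_a = nn.Linear(channels, n_classes_a)
        self.fc_b = nn.Linear(channels, n_classes_b)

    def forward(self, x):
        fa, fb = self.f_a(x), self.f_b(x)
        # Each task keeps the identity shortcut from the shared trunk (x) and
        # additionally receives the other task's residual as a cross shortcut.
        ya = self.relu(fa + x + fb)
        yb = self.relu(fb + x + fa)
        logits_a = self.fc_a(self.pool(ya).flatten(1))
        logits_b = self.fc_b(self.pool(yb).flatten(1))
        return logits_a, logits_b
```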

Cited by 64 publications (57 citation statements); references 35 publications. Citing publications span 2017–2023.
“…Although this problem is rarely identified in the literature, many of the existing methods are in fact designed to mitigate destructive interference in multi-task learning. For example, in the popular multi-branch neural network architecture and its variants, the task-specific branches are designed carefully with the prior knowledge regarding the relationships of certain tasks [18,8,20]. By doing this, people expect less conflicting training signals to the shared parameters.…”
Section: Smile (mentioning)
confidence: 99%
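As a concrete illustration of the multi-branch pattern described in the statement above, the hypothetical PyTorch sketch below uses one shared trunk with a small task-specific head per task; because both task losses backpropagate into the same trunk parameters, the conflicting training signals (the destructive interference mentioned in the quote) accumulate exactly there. Names and dimensions are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTrunkMTL(nn.Module):
    """Hard parameter sharing: one shared trunk, one branch per task."""
    def __init__(self, n_classes_a, n_classes_b):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.head_a = nn.Linear(256, n_classes_a)  # task-specific branch A
        self.head_b = nn.Linear(256, n_classes_b)  # task-specific branch B

    def forward(self, x):
        h = self.trunk(x)  # shared parameters
        return self.head_a(h), self.head_b(h)

model = SharedTrunkMTL(n_classes_a=10, n_classes_b=5)
x = torch.randn(4, 128)
logits_a, logits_b = model(x)
loss = F.cross_entropy(logits_a, torch.randint(0, 10, (4,))) + \
       F.cross_entropy(logits_b, torch.randint(0, 5, (4,)))
loss.backward()  # gradients from both tasks hit the shared trunk
```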
“…Additionally, with MTL comes a natural urge to simplify the models at hand and group the tasks that would benefit each other's learning process. With this mutually beneficial task relationship in mind, there are numerous domains and modalities [10,14,15,24,37,39,47,51] where the MTL methodology can be applied. As such, MTL is often used implicitly without a specific reference in methods such as transfer learning and fine-tuning [4,40] as well.…”
Section: Related Work (mentioning)
confidence: 99%
“…In our work we address the feature robustness issue by randomizing the sharing structure from the start of the training process and enforcing tasks to use alternate routes for their data-flow through the model. Both symmetric and asymmetric MTL approaches often rely on prior knowledge to help with architecture design, sharing options and task grouping [23,32,10,1]. If such knowledge is present it is a helpful resource for designing an MTL model.…”
Section: Related Work (mentioning)
confidence: 99%
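The statement above describes randomizing the sharing structure so that tasks take alternate routes through the model. The sketch below is only an illustrative stand-in for that idea, not the cited paper's actual algorithm: a layer holds several parallel units, and a fixed random mask drawn at construction decides which units each task may route through.

```python
import torch
import torch.nn as nn

class RandomlyRoutedLayer(nn.Module):
    """Illustrative sketch: per-task random routing over parallel units."""
    def __init__(self, in_dim, out_dim, n_units=4, n_tasks=2, keep_prob=0.5):
        super().__init__()
        self.units = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_units)])
        # Fixed random routing mask drawn once: mask[t, u] == 1 means task t may use unit u.
        mask = (torch.rand(n_tasks, n_units) < keep_prob).float()
        mask[mask.sum(dim=1) == 0, 0] = 1.0  # every task keeps at least one unit
        self.register_buffer("mask", mask)

    def forward(self, x, task_id):
        outs = torch.stack([unit(x) for unit in self.units], dim=0)  # (n_units, B, out_dim)
        w = self.mask[task_id].view(-1, 1, 1)
        # Average the outputs of the units this task is routed through.
        return (outs * w).sum(dim=0) / w.sum().clamp(min=1.0)

layer = RandomlyRoutedLayer(in_dim=64, out_dim=32)
y0 = layer(torch.randn(8, 64), task_id=0)  # task 0's route
y1 = layer(torch.randn(8, 64), task_id=1)  # task 1's (possibly different) route
```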