2020 International SAUPEC/RobMech/PRASA Conference
DOI: 10.1109/saupec/robmech/prasa48453.2020.9041053

Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep Character Recognition

Abstract: Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models, and generally yields improved performance and faster training times. The technique of pre-training on one task and then retraining on a new one is called transfer learning. In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks. We perform three sets of experiments with varying levels of similarity between source and target tasks to investigate t…
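
The pre-train-then-retrain recipe the abstract describes can be sketched as follows. This is a minimal illustration assuming PyTorch/torchvision, not the authors' code; the 26-class head, the frozen backbone, and the hyperparameters are illustrative assumptions.

```python
# Minimal transfer-learning sketch (hypothetical, not the paper's code).
import torch
import torch.nn as nn
from torchvision import models

# Step 1: start from ImageNet-pretrained weights (the "source" task).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Step 2: swap the classifier head for a character-recognition target
# task; 26 output classes is an assumed, illustrative number.
model.fc = nn.Linear(model.fc.in_features, 26)

# Optionally freeze the pretrained feature extractor so that only the
# new head is trained during the first fine-tuning phase.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()
```

Whether freezing the backbone helps depends on how similar the source and target tasks are, which is exactly the question the paper's three sets of experiments vary.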

Cited by 4 publications (2 citation statements). References 12 publications.
“…One reason for the success of deep learning models is their ability to transfer previous learning to new tasks. In image classification, this transfer leads to more robust models and faster training [8][9][10][11][12][13]. Despite the importance of transfer in deep learning, there has been little insight into the nature of transferring relational knowledge, that is, the representations learnt by graph neural networks.…”
Section: Introduction and Related Work (mentioning; confidence: 99%)
“…Transfer learning [13,14,15,16,17] allows knowledge derived from data-rich tasks to be applied to tasks, languages, or domains where data is limited. It consists of two steps: pre-training on one task or domain (the source), followed by domain adaptation, in which the learned representations are reused on a different task, domain, or language (the target).…”
Section: Introduction (mentioning; confidence: 99%)
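
The two steps named in this quotation can be made concrete with a small sketch. Everything here is hypothetical and illustrative: the encoder architecture, the label counts, and the `source_loader`/`target_loader` names are assumptions, not anything from the cited papers.

```python
# Hypothetical two-step transfer sketch: (1) pre-train on a data-rich
# source task, (2) adapt the learned representations to a target task.
import torch
import torch.nn as nn

encoder = nn.Sequential(          # shared representation learner
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU()
)
source_head = nn.Linear(256, 100)  # source task: many classes, much data
target_head = nn.Linear(256, 10)   # target task: few classes, little data

def train(head, loader, epochs=1, lr=1e-3):
    # Jointly optimises the encoder and the given task head.
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(head(encoder(x)), y)
            loss.backward()
            opt.step()

# Step 1: pre-train encoder + source_head on the source data.
#   train(source_head, source_loader)
# Step 2: reuse the same encoder and train only a new head (or
# fine-tune everything) on the limited target data.
#   train(target_head, target_loader)
```

The key point of the quotation is that the encoder's representations, learnt where data is plentiful, are what carry over; only the task-specific head must be learnt from the limited target data.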