2021
DOI: 10.48550/arxiv.2104.02144
Preprint

A Concise Review of Transfer Learning

Abstract: The availability of abundant labeled data in recent years has led researchers to introduce a methodology called transfer learning, which utilizes existing data in situations where it is difficult to collect new annotated data. Transfer learning aims to boost the performance of a target learner by exploiting data from a related source domain. In contrast to traditional machine learning and data mining techniques, which assume that the training and testing data are drawn from the same feature space and distribution…

Cited by 3 publications (2 citation statements)
References: 32 publications

“…For this purpose, a base model is first trained on labeled data, and this knowledge is then used as initialization knowledge for the new classes. This is done by replacing and re-training the final classification layers through fine-tuning [Far+21]. To stay faithful to real production conditions, only a small portion of labeled data is used initially.…”
Section: Related Work (mentioning)
confidence: 99%
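
The head-replacement fine-tuning this statement describes can be made concrete with a short sketch. The snippet below is a minimal illustration, not code from the cited works: it assumes a torchvision ResNet-18 as the pre-trained base model and a hypothetical number of new classes, and it re-trains only the replaced classification layer.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5  # hypothetical: classes of the new, sparsely labeled task

# 1. Base model trained on labeled source data (here: ImageNet weights).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# 2. Replace the final classification layer with a fresh head
#    sized for the new classes.
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)

# 3. Re-train only the new head; the base weights act as
#    initialization knowledge and stay frozen.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative update on a dummy batch standing in for the
# small labeled portion mentioned above.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_NEW_CLASSES, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```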
“…Transfer learning consists of training a model on a specific source task and then using the learned weights to start the fine-tuning process on a second task of interest, commonly called a downstream task. It is an effective technique for reducing the amount of data and training required to achieve high performance on multiple tasks with neural networks (Farahani et al., 2021). This is especially useful when there is not enough labeled data to fine-tune a good model, because transfer learning provides a way around this problem and makes it possible to develop skillful models that would otherwise be infeasible.…”
Section: Transfer Learning (mentioning)
confidence: 99%
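
As a sketch of the two-stage workflow this statement describes (train on a source task, then reuse the learned weights to initialize a downstream task), the snippet below uses PyTorch with entirely hypothetical module sizes and checkpoint names; it is an assumption-laden illustration, not the procedure of the cited work.

```python
import torch
import torch.nn as nn

def make_backbone() -> nn.Sequential:
    """Feature extractor shared between the source and downstream tasks."""
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())

# Stage 1: train on the source task (large labeled dataset).
backbone = make_backbone()
source_model = nn.Sequential(backbone, nn.Linear(128, 10))  # 10 source classes (hypothetical)
# ... source-task training loop would run here ...
torch.save(backbone.state_dict(), "backbone.pt")  # hypothetical checkpoint name

# Stage 2: initialize the downstream model from the learned weights
# and fine-tune it on the (smaller) task of interest.
downstream_backbone = make_backbone()
downstream_backbone.load_state_dict(torch.load("backbone.pt"))
downstream_model = nn.Sequential(downstream_backbone, nn.Linear(128, 3))  # 3 downstream classes

# A reduced learning rate is a common choice when fine-tuning transferred
# weights, so the source knowledge is not overwritten too quickly.
optimizer = torch.optim.SGD(downstream_model.parameters(), lr=1e-4)
```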