Improving fine-tuning of self-supervised models with Contrastive Initialization (2023)
DOI: 10.1016/j.neunet.2022.12.012

Cited by 13 publications (1 citation statement)
References 55 publications
“…Data augmentation, also known as data enhancement, is a technique for increasing the quantity and diversity of limited data. Unlike other common methods for preventing overfitting, such as pretraining [14], dropout [15], batch normalization [16], and transfer learning [17], data augmentation addresses the root cause of overfitting, i.e., insufficient training samples, by mathematically simulating the variations that could plausibly occur in real data. As shown in Figure 1, it can extract more generalized information and features from small data sets, increase the size of the data set used to train machine learning models, and ultimately improve the precision and generalizability of those models [18].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
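
The augmentation idea in the quoted statement can be made concrete with a minimal sketch. The snippet below assumes an image-classification setting and uses PyTorch's torchvision; the specific transforms and their parameters are illustrative choices, not taken from the cited paper or from the work above.

# Minimal sketch of data augmentation: each training image is randomly
# perturbed (crop, flip, colour jitter) so the model sees a larger and
# more diverse set of samples, which counteracts overfitting on small
# data sets. Transform choices and parameters are illustrative assumptions.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop + resize
    transforms.RandomHorizontalFlip(p=0.5),                # mirror half the images
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # mild photometric changes
    transforms.ToTensor(),
])

# Evaluation keeps deterministic preprocessing only, so augmentation
# enlarges the effective training distribution without altering test inputs.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

In practice, train_transform would be passed to the training Dataset/DataLoader so that a fresh random variant of each image is produced every epoch, while eval_transform is applied to validation and test data.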