2023
DOI: 10.1109/access.2023.3261884

Exploiting All Samples in Low-Resource Sentence Classification: Early Stopping and Initialization Parameters

Abstract: To improve deep-learning performance in low-resource settings, many researchers have redesigned model architectures or applied additional data (e.g., external resources, unlabeled samples). However, there have been relatively few discussions on how to make good use of small amounts of labeled samples, although it is potentially beneficial and should be done before applying additional data or redesigning models. In this study, we assume a low-resource setting in which only a few labeled samples (i.e., 30-100 pe…

Cited by 1 publication (1 citation statement)
References 52 publications

“…Forouzesh and Salehi [40] mention early stopping as one of the regularization techniques applied to avoid overfitting in deep learning architectures. Additionally, Choi and Lee [41] propose a learning strategy that involves training all samples with good initialization parameters and stopping the model using early stopping techniques to prevent overfitting. Tian and Ji [42] also mention the use of early stopping and dropout regularization to combat overfitting in deep learning models.…”
Section: Early Stopping Techniques in Deep Learning Models
confidence: 99%
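
The citation statement above describes early stopping only in prose. As a rough illustration of the general mechanism (not the specific strategy of Choi and Lee [41], whose initialization and stopping criteria for the low-resource setting are not reproduced here), a patience-based early-stopping loop in PyTorch might look like the sketch below; the model, the data loaders, and the hyperparameter values are illustrative assumptions.

```python
import copy

import torch
import torch.nn as nn


def train_with_early_stopping(model, train_loader, val_loader,
                              max_epochs=100, patience=5, lr=1e-3):
    """Generic patience-based early stopping on validation loss.

    A minimal sketch of the regularization technique the citing papers
    describe, not the implementation from Choi and Lee [41]; the loaders,
    patience value, and learning rate are illustrative assumptions.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    best_val_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        # One pass over all labeled training samples.
        model.train()
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()

        # Monitor held-out loss to decide when to stop.
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for inputs, labels in val_loader:
                val_loss += criterion(model(inputs), labels).item()

        if val_loss < best_val_loss:
            best_val_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no improvement for `patience` epochs: stop early

    # Restore the weights from the best validation checkpoint.
    model.load_state_dict(best_state)
    return model
```

Restoring the best checkpoint, rather than keeping the final weights, is what allows training to continue past the optimum without the model paying for the extra epochs in overfitting.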