2020
DOI: 10.1109/access.2019.2962258

Semi-Supervised Learning for Fine-Grained Classification With Self-Training

Abstract: Semi-supervised learning is a machine learning approach that tackles the challenge of having a large set of unlabeled data and only a few labeled examples. In this paper we adopt a semi-supervised self-training method to increase the amount of training data, prevent overfitting, and improve the performance of deep models by proposing a novel selection algorithm that prevents mistake reinforcement, a common failure mode of conventional self-training models. The model leverages unlabeled data and, specifically, after each …
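To make the generic mechanism concrete, here is a minimal self-training sketch, assuming a scikit-learn classifier, a fixed confidence threshold, and a round limit. All of these names and values are illustrative assumptions; the paper's own selection algorithm is only hinted at by the truncated abstract above.

```python
# Minimal self-training sketch (illustrative; not the paper's algorithm).
# The classifier, threshold, and round limit are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, confidence_threshold=0.95, max_rounds=10):
    """Iteratively promote high-confidence unlabeled samples to pseudo-labels."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break  # termination: no unlabeled data left
        probs = model.predict_proba(X_unlab)
        conf = probs.max(axis=1)
        # Selection step: only confident predictions become pseudo-labels,
        # the standard guard against reinforcing early mistakes.
        keep = conf >= confidence_threshold
        if not keep.any():
            break  # termination: nothing passes the selection criterion
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, probs[keep].argmax(axis=1)])
        X_unlab = X_unlab[~keep]
    return model
```

The selection step is the part the paper refines: a stricter selection rule is what "prevents mistake reinforcement" in the abstract's terms.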

Cited by 34 publications (16 citation statements)
References 33 publications
“…This process continues until some pre-defined termination condition is reached. Self-training has previously been applied to several problems ranging from fine-grained classification [86] to parsing [87]. However, its capability has not been explored in the context of MOOCs analytics.…”
Section: Self-training
confidence: 99%
“…It can be observed that the formulation in Equation (7) was similar to the work proposed in [44]. However, the difference between the formulation in Equation (7) and the optimization flow in [44] was that we introduced a class-wise bias by normalizing class-wise confidence levels, compared to the use of an L1 regularizer to prevent most pseudo-labels from being ignored by serving as a negative sparse term. The authors in [44] solved the pseudo-label framework optimizer by utilizing the solver in Equation (8)…”
Section: Self-training With Self-paced Curriculum Learning
confidence: 99%
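For readers without access to the cited equations, the L1-style pseudo-label objective attributed to [44] in this excerpt is commonly written in the class-balanced self-training form below. This is a hedged reconstruction under that assumption; the symbols (the pseudo-label ŷ, class-wise parameter k_c, unlabeled image I_u) and the exact form are not taken from either paper's Equations (7)–(8).

```latex
% Hedged reconstruction of an L1-regularized pseudo-label objective in
% the style attributed to [44]; symbols are assumptions, not the papers'
% exact Equations (7)-(8).
\begin{align}
\min_{w,\,\hat{y}}\;
  &-\sum_{u}\sum_{c} \hat{y}_{u,c}\,\log p(c \mid w, I_u)
   \;-\; \sum_{u}\sum_{c} k_c\,\lvert \hat{y}_{u,c} \rvert \\
\text{s.t.}\;
  &\hat{y}_u \in \{e_1,\dots,e_C\} \cup \{\mathbf{0}\}
\end{align}
% e_c is a one-hot vector; the second (negative sparse) term rewards
% selecting pseudo-labels so that most are not ignored.
```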
“…From Equation (10), the pseudo-label generation and selection were not dependent on the prediction confidence level output as given in the solver by [44]. Instead, this was dependent on the class-wise normalized output p_n(c|w, I_u) / exp(−k_c).…”
Section: Self-training With Self-paced Curriculum Learning
confidence: 99%
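A small sketch of what such class-wise normalized selection could look like, assuming the score p_n(c|w, I_u) / exp(−k_c) drives both the argmax and the keep/drop decision. The function name, the threshold of 1 (equivalent to p_n > exp(−k_c)), and the toy values are assumptions for illustration.

```python
import numpy as np

def class_normalized_pseudo_labels(probs, k):
    """Select pseudo-labels from class-wise normalized confidences.

    probs: (N, C) softmax outputs p_n(c|w, I_u) for N unlabeled samples.
    k:     (C,) class-wise parameters k_c; larger k_c lowers the bar for
           class c, counteracting class imbalance.
    Returns (labels, mask): argmax labels under the normalized score, and
    a mask keeping samples whose score exceeds 1, i.e. p_n > exp(-k_c).
    """
    scores = probs / np.exp(-k)           # class-wise normalization
    labels = scores.argmax(axis=1)        # pseudo-label per sample
    best = scores[np.arange(len(probs)), labels]
    mask = best > 1.0                     # keep only confident selections
    return labels, mask

# Toy usage with hypothetical values:
probs = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3]])
k = np.array([0.2, 0.9, 0.5])             # assumed class-wise parameters
labels, mask = class_normalized_pseudo_labels(probs, k)
```

The design point in the excerpt is that the bias enters through per-class normalization of the confidence itself, rather than through a separate sparse regularization term in the objective.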
“…Moreover, compared to the process of obtaining well-labeled data, unlabeled data is rather inexpensive and abundant. Semi-supervised learning algorithms have been adopted in some works mentioned in the literature for some classification tasks [27, 29–34].…”
Section: Introduction
confidence: 99%