2015
DOI: 10.1109/tcyb.2014.2332003
SEG-SSC: A Framework Based on Synthetic Examples Generation for Self-Labeled Semi-Supervised Classification

Abstract: Self-labeled techniques are semi-supervised classification methods that address the shortage of labeled examples via a self-learning process based on supervised models. They pro…
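The abstract describes the self-labeling paradigm only at a high level. As a point of reference, the sketch below shows a generic self-training loop using scikit-learn's SelfTrainingClassifier: a base supervised model is fit on the labeled subset, its most confident predictions on unlabeled points are adopted as labels, and the model is refit iteratively. This illustrates the family of methods the paper targets, not the SEG-SSC framework itself (which additionally generates synthetic labeled examples); the dataset, base classifier, and 0.9 threshold are arbitrary choices for the example.

# Generic self-training semi-supervised classification (illustration only;
# NOT the SEG-SSC method, which also injects synthetic labeled examples).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data: hide 90% of the labels; unlabeled points are marked with -1.
X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.9] = -1

# The wrapped supervised model is fit on the labeled subset, then points it
# predicts with probability >= 0.9 are self-labeled and the model is refit.
model = SelfTrainingClassifier(DecisionTreeClassifier(random_state=0),
                               threshold=0.9)
model.fit(X, y_partial)
print("accuracy on all points:", model.score(X, y))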

Cited by 64 publications (18 citation statements)
References 57 publications
Citing publications span 2015–2024.
“…While this is the most popular area, it is at the same time not the only one where skewed distributions may affect the learning process. This phenomenon appears often in semi-supervised [56], active [68] and unsupervised learning [41], especially in clustering. Despite numerous solutions dedicated to this problem, most of them display reduced effectiveness when true underlying groups of data have highly varying sizes.…”
Section: Semi-supervised and Unsupervised Learning From Imbalanced Data (mentioning)
confidence: 97%
“…Given that the proposed 'dynamic balancing' technique (adding positive instances only) gave best results overall in our study, a comparison, and possibly a hybridised method, between this approach and other techniques (e.g., the approach in Triguero et al., 2015) is of interest. Utilising other algorithms, such as co- and multi-training, which make use of multiple independent views of the data, could potentially increase the classification ability.…”
Section: Discussion (mentioning)
confidence: 97%
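The 'dynamic balancing' idea contrasted above with Triguero et al. (2015) amounts to oversampling the positive (minority) class only. As a rough illustration, a SMOTE-like interpolation over labeled positives can be sketched as follows; this is a hand-rolled example under stated assumptions, not the procedure of either cited paper, and the function name interpolate_positives is hypothetical.

# SMOTE-like sketch: synthesize new positive examples by interpolating
# each sampled labeled positive toward one of its k nearest positive
# neighbours. Hypothetical helper, not from the cited papers.
import numpy as np

def interpolate_positives(X_pos, n_new, k=5, rng=None):
    rng = rng or np.random.RandomState(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.randint(len(X_pos))
        d = np.linalg.norm(X_pos - X_pos[i], axis=1)  # distances to all positives
        neighbours = np.argsort(d)[1:k + 1]           # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.rand()                              # random interpolation factor
        synthetic.append(X_pos[i] + lam * (X_pos[j] - X_pos[i]))
    return np.array(synthetic)

# Usage: augment the labeled positives before (re)training the classifier.
X_pos = np.random.RandomState(1).randn(20, 4)
print(interpolate_positives(X_pos, n_new=10).shape)  # (10, 4)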
“…Because more and more data are being generated and stored, labeling cannot be done for all examples, and the predictive or descriptive task will be supported by a subset of labelled examples [104]. Data preprocessing, especially at the instance level [105], would be useful to improve the quality of this kind of data.…”
Section: New Big Data Learning Paradigms (mentioning)
confidence: 99%