2013
DOI: 10.1016/j.neucom.2012.08.020
Using clustering analysis to improve semi-supervised classification

Cited by 120 publications (42 citation statements)
References 16 publications
“…Recently, there have been attempts to replace the lengthy objective-function optimization of semi-supervised SVMs with cluster analysis [6] [8]. The idea is first to find high-density regions (clusters) in feature space using clustering methods; the clusters are then passed to a standard supervised SVM, which finds a separating decision boundary that passes through low-density regions.…”
Section: Related Work
confidence: 99%
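The cluster-then-classify idea in this excerpt can be sketched as follows. The synthetic dataset, the choice of k-means as the clustering method, the majority-vote label transfer, and the linear SVM are illustrative assumptions, not the exact setup of the cited papers:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two dense blobs in feature space, one per class.
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
               rng.normal(+2.0, 0.5, size=(50, 2))])
y_true = np.array([0] * 50 + [1] * 50)
labeled_idx = np.array([0, 1, 50, 51])  # only four labeled points

# Step 1: locate high-density regions (clusters) using all points,
# labeled and unlabeled alike.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: transfer the few known labels to whole clusters by majority
# vote among each cluster's labeled members.
cluster_label = {}
for c in np.unique(clusters):
    members = labeled_idx[clusters[labeled_idx] == c]
    labels, counts = np.unique(y_true[members], return_counts=True)
    cluster_label[c] = int(labels[np.argmax(counts)])
y_pseudo = np.array([cluster_label[c] for c in clusters])

# Step 3: a standard supervised SVM fit on the pseudo-labeled data
# finds a boundary through the low-density gap between the clusters.
acc = SVC(kernel="linear").fit(X, y_pseudo).score(X, y_true)
```

Because the decision boundary is learned from whole clusters rather than the four labeled points alone, it settles in the sparse region between the two dense blobs.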
“…Although the labels for training and testing patterns are usually available in BCI datasets, they are not always accurate, so clustering is useful for modeling the intrinsic pattern structure before training the classifier. Moreover, it is possible to follow the same strategy as semi-supervised clustering algorithms, whose key idea is to exploit prior knowledge to improve clustering performance [16,17]. In [18], a consistency-based criterion is applied to select relevant features: in a consistent feature set, no two patterns with the same values in all features can have different class labels.…”
Section: A Label-aided Filter Approach for Evolutionary Multi-objective…
confidence: 99%
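The consistency criterion summarized above can be sketched directly: a feature subset is consistent when no two patterns that agree on every selected feature carry different class labels. The function name and toy data are illustrative, not from [18]:

```python
def is_consistent(X, y, features):
    """True if no two patterns that agree on every selected feature
    carry different class labels."""
    seen = {}
    for row, label in zip(X, y):
        key = tuple(row[f] for f in features)
        if seen.setdefault(key, label) != label:
            return False  # same feature values, different labels
    return True

# Toy patterns: feature 0 determines the class; feature 2 does not.
X = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
y = [0, 0, 1, 1]
print(is_consistent(X, y, [0]))  # True: feature 0 alone separates classes
print(is_consistent(X, y, [2]))  # False: value (0,) occurs with labels 0 and 1
```

A filter-style selector would then prefer the smallest feature subsets for which `is_consistent` holds.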
“…To take advantage of both labeled and unlabeled data, several studies have designed ways of combining classifiers and clusterers [22], [37], [38], [39], [40], [41], [21], [42], [43]. Acharya et al. [21] and Gao et al. [43], in particular, combine a handful of classifiers and clusterers with the ultimate goal of classifying new data.…”
Section: Related Work
confidence: 99%