2010
DOI: 10.1016/j.knosys.2010.03.012
Semi-supervised learning based on nearest neighbor rule and cut edges

Cited by 68 publications (22 citation statements)
References 17 publications
“…A further similar approach is Co-Bagging [37], [38] where confidence is estimated from the local accuracy of committee members. Other recent self-labeled approaches are [39], [40], [41], [42], [43].…”
Section: B. Self-labeled Techniques: Previous Work
confidence: 99%
“…Following the recommendation established in [41], in the division process we do not maintain the class proportions in the labeled and unlabeled sets, since the main aim of SSC is to exploit unlabeled data for better classification results. Hence, we randomly select the examples that will be marked as labeled instances, and the class labels of the remaining instances are removed.…”
Section: A. Data Sets and Parameters
confidence: 99%
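As an illustrative sketch of the division process described above (the function name and API are assumptions, not taken from the cited paper): a subset of examples is marked as labeled uniformly at random, without enforcing class proportions, and the labels of the rest are discarded.

```python
import random

def random_ssl_split(X, y, n_labeled, seed=0):
    """Randomly mark n_labeled examples as labeled; hide the labels of the rest.

    Class proportions in the labeled set are deliberately NOT enforced,
    per the recommendation discussed in the citation above.
    """
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    labeled, unlabeled = idx[:n_labeled], idx[n_labeled:]
    X_l = [X[i] for i in labeled]
    y_l = [y[i] for i in labeled]
    X_u = [X[i] for i in unlabeled]  # the labels of these are discarded
    return X_l, y_l, X_u

# usage on a toy data set
X = [[i] for i in range(10)]
y = [0, 1] * 5
X_l, y_l, X_u = random_ssl_split(X, y, n_labeled=3)
```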
“…However, this method can cause several problems, such as misclassification and noisy labeled data, and can also lead to error reinforcement. Y. Wang et al. proposed a method to address this problem, called the self-training nearest neighbour rule using cut edges [2]. This method pools both testing samples and training samples in an iterative way. It has two aspects.…”
Section: 3. Self-Training Nearest Neighbour Rule Using Cut Edges
confidence: 99%
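The idea summarized above can be sketched as follows. This is a minimal illustration of self-training with the 1-NN rule filtered by a cut-edge check, not the paper's exact algorithm: a "cut edge" joins a point to a differently labeled neighbor, and a candidate whose neighborhood contains too many cut edges is treated as noisy and not added. The neighborhood size `k` and the fixed rejection threshold `max_cut` are illustrative assumptions; the original method applies a statistical test on cut-edge weights instead.

```python
import math

def nn_label(x, X_l, y_l):
    """Label x by its single nearest labeled neighbor (1-NN rule)."""
    i = min(range(len(X_l)), key=lambda j: math.dist(x, X_l[j]))
    return y_l[i]

def cut_edge_ratio(x, label, X_l, y_l, k=3):
    """Fraction of x's k nearest labeled neighbors whose label differs.

    Each edge to a differently labeled neighbor is a "cut edge"; a high
    ratio suggests the candidate label is noisy.
    """
    order = sorted(range(len(X_l)), key=lambda j: math.dist(x, X_l[j]))[:k]
    return sum(y_l[j] != label for j in order) / len(order)

def self_train(X_l, y_l, X_u, k=3, max_cut=0.5, rounds=10):
    """Iteratively label unlabeled points with 1-NN, keeping only those
    whose cut-edge ratio stays below max_cut (an assumed threshold)."""
    X_l, y_l, X_u = list(X_l), list(y_l), list(X_u)
    for _ in range(rounds):
        kept, added = [], False
        for x in X_u:
            lab = nn_label(x, X_l, y_l)
            if cut_edge_ratio(x, lab, X_l, y_l, k) <= max_cut:
                X_l.append(x)
                y_l.append(lab)
                added = True
            else:
                kept.append(x)  # re-examined in the next round
        X_u = kept
        if not added:  # no progress: stop iterating
            break
    return X_l, y_l, X_u

# usage: two well-separated 1-D clusters
X_l0 = [[0.0], [1.0], [9.0], [10.0]]
y_l0 = [0, 0, 1, 1]
X_u0 = [[2.0], [8.0], [0.5], [9.5]]
X_out, y_out, rest = self_train(X_l0, y_l0, X_u0)
```

On this toy data every unlabeled point is absorbed with the label of its cluster, since none of the candidates accumulates enough cut edges to be rejected.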
“…Although some techniques, e.g. data editing [5], have been employed to alleviate this noise-related problem [6], the results are still unsatisfactory. As a result, it can introduce too many wrongly labeled candidates into the labeled training set, which may severely degrade performance.…”
Section: Introduction
confidence: 99%