2018
DOI: 10.1109/tnnls.2017.2766704

Lagrangean-Based Combinatorial Optimization for Large-Scale S3VMs

Abstract: The process of manually labeling instances, essential to a supervised classifier, can be expensive and time-consuming. In such a scenario, the semisupervised approach, which makes use of unlabeled patterns when building the decision function, is a more appealing choice. Indeed, large amounts of unlabeled samples can often be obtained easily. Many optimization techniques have been developed in the last decade to include the unlabeled patterns in the support vector machines formulation. Two broad strategies a…

Cited by 6 publications (2 citation statements) · References 10 publications
“…Receiver operating characteristic (ROC) analysis at the slide level will be performed, and the measure used for comparing the algorithms will be the area under the ROC curve (AUC) [37]. On the independent test cases, the Lagrangian-based S³VM [38,39] model achieved an AUC of 0.9920, as shown in Fig 7. Our score is very close to the score (0.9935) of the top-ranked team on the leaderboard of the Camelyon16 ISBI challenge [37].…”
Section: Evaluation in Camelyon16
confidence: 99%
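
As context for the slide-level evaluation this statement describes, below is a minimal sketch of how an AUC of this kind is typically computed, here using scikit-learn's roc_auc_score. The labels and scores are hypothetical placeholders, not Camelyon16 results.

```python
# Minimal sketch of slide-level ROC/AUC evaluation.
# Labels and scores below are hypothetical placeholders, not challenge data.
from sklearn.metrics import roc_auc_score

labels = [1, 0, 1, 1, 0, 0, 1, 0]                 # 1 = tumour slide, 0 = normal (assumed ground truth)
scores = [0.92, 0.10, 0.85, 0.77, 0.30, 0.05, 0.66, 0.40]  # per-slide tumour probability from a classifier

auc = roc_auc_score(labels, scores)               # area under the ROC curve
print(f"slide-level AUC = {auc:.4f}")
```

The AUC summarizes the full ROC curve in one number, which is why the quoted comparison between 0.9920 and the leaderboard's 0.9935 is meaningful without fixing a decision threshold.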
“…As is known, samples from the low-density region are more valuable for constructing the classifier, since the ideal classification hyperplane often passes through the sparse data region with the lowest density in the feature space [17]. Local reachable density (LRD) [18,19] and local outlier factor (LOF) [20,21] can effectively evaluate the density of samples. LRD represents the density of a sample based on its K-distance neighbourhood, [22] and LOF represents the average ratio of the LRD of the sample's neighbourhood points to the LRD of the sample itself.…”
Section: Introduction
confidence: 99%
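
To make the LRD and LOF definitions quoted above concrete, here is a minimal NumPy sketch of the standard formulation: the reachability distance of a point with respect to a neighbour, LRD as the inverse of the mean reachability distance over the k-neighbourhood, and LOF as the average ratio of neighbour LRDs to the point's own LRD. The function name and toy data are illustrative, not taken from the cited papers.

```python
# Sketch of local reachable density (LRD) and local outlier factor (LOF),
# following the definitions in the passage above. Toy data is illustrative.
import numpy as np

def lrd_and_lof(X, k=3):
    """Return (lrd, lof) arrays for the rows of X using k nearest neighbours."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)      # pairwise distances
    order = np.argsort(d, axis=1)[:, 1:k + 1]                      # k nearest neighbours, self excluded
    k_dist = np.take_along_axis(d, order[:, -1:], axis=1).ravel()  # k-distance of each point
    # reachability distance of p w.r.t. neighbour o: max(k-distance(o), d(p, o))
    reach = np.maximum(k_dist[order], np.take_along_axis(d, order, axis=1))
    lrd = 1.0 / reach.mean(axis=1)                                 # local reachable density
    lof = lrd[order].mean(axis=1) / lrd                            # mean neighbour LRD / own LRD
    return lrd, lof

# A cluster of 20 points plus one isolated point far from the cluster.
X = np.vstack([np.random.default_rng(0).normal(size=(20, 2)), [[6.0, 6.0]]])
lrd, lof = lrd_and_lof(X, k=3)
print(lof[-1])  # the isolated point gets LOF >> 1
```

A LOF value well above 1 flags a point as lying in a sparser region than its neighbours, which is exactly the low-density criterion the quoted passage uses to pick samples near the ideal separating hyperplane.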