2021
DOI: 10.3390/e23111473
Minimum Distribution Support Vector Clustering

Abstract: Support vector clustering (SVC) is a boundary-based algorithm, which has several advantages over other clustering methods, including identifying clusters of arbitrary shapes and numbers. Leveraging the high generalization ability of the large margin distribution machine (LDM) and the optimal margin distribution clustering (ODMC), we propose a new clustering method, minimum distribution support vector clustering (MDSVC), for improving the robustness of boundary point recognition, which characterizes the o…


Cited by 2 publications (2 citation statements) · References 23 publications
“…A dual problem solver is the core of the training phase in which complex operations, huge iterations, and unaffordable memory by the pre-computed kernel matrix are critical and relevant tasks. To improve the robustness, [9] characterized the optimal sphere by introducing the first-order and second-order statistics. For complexity reduction, [4] rewrote the dual problem by introducing the Jaynes maximum entropy, while [35] started the solver with a position-based weight.…”
Section: Related Work
Confidence: 99%
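The excerpt above points at the dual problem solver as the bottleneck of SVC training, with the pre-computed kernel matrix dominating memory. As a minimal sketch (not the paper's solver), the snippet below builds the Gaussian kernel matrix and evaluates the standard SVC dual objective W(β) = Σᵢ βᵢK(xᵢ,xᵢ) − Σᵢⱼ βᵢβⱼK(xᵢ,xⱼ) at a feasible starting point; the N × N matrix `K` is exactly the quadratically growing object the quote refers to.

```python
import numpy as np

# Sketch of the SVC dual objective, assuming a Gaussian kernel
# K(x, y) = exp(-q * ||x - y||^2). The pre-computed N x N kernel
# matrix is the memory bottleneck named in the excerpt.

def gaussian_kernel_matrix(X, q=1.0):
    """Pairwise Gaussian kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-q * np.maximum(d2, 0.0))

def svc_dual_objective(beta, K):
    """W(beta) = sum_i beta_i K_ii - sum_ij beta_i beta_j K_ij."""
    return float(beta @ np.diag(K) - beta @ K @ beta)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
K = gaussian_kernel_matrix(X, q=0.5)
beta = np.full(20, 1.0 / 20)  # feasible start: 0 <= beta_i, sum(beta) = 1
obj = svc_dual_objective(beta, K)
print(K.shape, obj)
```

A real solver would maximize W(β) over the simplex with box constraints 0 ≤ βᵢ ≤ C; the point here is only that every iteration touches the full matrix `K`, which is why the cited works trade it for entropy-based reformulations or position-based initial weights.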
“…On the other hand, parameter selection stops model training at the right time for cluster discovery. In the literature [1,9], the expected cluster number k is still the most common stop condition before obtaining the appropriate kernel width q by an incremental test. However, k is what we need for discovery in practical situations [8,10].…”
Section: Introduction
Confidence: 99%
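The excerpt describes the common stop condition: incrementally test the Gaussian kernel width q and stop once the clustering reaches the expected cluster number k. The sketch below illustrates only the shape of that loop under a loud assumption: real SVC labels clusters by checking sphere-boundary connectivity, whereas here a distance-threshold graph with link radius ~ 1/√q stands in as a hypothetical proxy for the labeling step.

```python
import numpy as np

# Hedged sketch of the incremental kernel-width test from the excerpt:
# sweep q until the induced clustering has the expected k clusters.
# ASSUMPTION: connected components of a distance-threshold graph stand
# in for SVC's actual cluster-labeling procedure.

def count_components(adj):
    """Connected components of a boolean adjacency matrix (iterative DFS)."""
    n = len(adj)
    seen, comps = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        comps += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if seen[u]:
                continue
            seen[u] = True
            stack.extend(v for v in range(n) if adj[u][v] and not seen[v])
    return comps

def incremental_width_test(X, k, qs):
    """Return the first width q whose proxy clustering yields k clusters."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    for q in qs:
        r = 1.0 / np.sqrt(q)  # assumed link radius ~ kernel length scale
        if count_components(d < r) == k:
            return q
    return None

# Two well-separated blobs: the sweep should find a q giving k = 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(5, 0.2, (10, 2))])
q_star = incremental_width_test(X, k=2, qs=np.linspace(0.05, 20, 60))
print(q_star)
```

The loop makes the excerpt's objection concrete: the test presupposes k, yet k is precisely what cluster discovery is supposed to produce.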