2016
DOI: 10.3906/elk-1307-137

Heuristic sample reduction method for support vector data description

Abstract: Support vector data description (SVDD) has become one of the most promising methods for one-class classification, finding the boundary of the training set. However, SVDD has a time complexity of O(N³) and a space complexity of O(N²). When dealing with very large training sets, e.g., a training set of aeroengine gas path parameters with size N > 10⁶ sampled from several months of flight data, SVDD fails. To solve this problem, a method called heuristic sample reduction (HSR) is propo…
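The quadratic memory term quoted in the abstract can be made concrete with a back-of-the-envelope calculation (a sketch using the abstract's N = 10⁶; the variable names are ours):

```python
# Why plain SVDD breaks down at the abstract's N > 10^6:
# the dense N x N Gram (kernel) matrix alone no longer fits in memory.
N = 1_000_000
bytes_per_float = 8                      # float64 entries
gram_bytes = N * N * bytes_per_float     # the O(N^2) space term
print(round(gram_bytes / 2**40, 2))      # ~7.28 TiB for the matrix alone
```

This is before accounting for the O(N³) time of solving the quadratic program, which is why some form of sample reduction is needed first.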

Cited by 10 publications (4 citation statements); references 32 publications.
“…b) Pruning: The idea of pruning is to iteratively remove observations from high-density regions as long as the sample remains "density-connected". One way to achieve this is by pruning all neighbors of an observation closer than a minimum distance, starting from the observation closest to the cluster mean [13]. Yet this approach requires setting the minimum distance threshold, and a good choice is data-dependent.…”
Section: B Sampling Methodsmentioning
confidence: 99%
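The pruning loop described in the statement above can be sketched as a greedy distance-based thinning. This is a simplification of the idea attributed to [13]: points are visited in order of distance to the cluster mean, a point is kept only if it lies at least `min_dist` from every point already kept, and density-connectivity is not verified; the function and argument names are ours, not the paper's.

```python
import numpy as np

def prune(X, min_dist):
    """Greedy thinning: visit points closest to the cluster mean first,
    and drop any point within min_dist of an already-kept point."""
    order = np.argsort(np.linalg.norm(X - X.mean(axis=0), axis=1))
    kept = []
    for i in order:
        if all(np.linalg.norm(X[i] - X[j]) >= min_dist for j in kept):
            kept.append(i)
    return X[np.sort(kept)]

# Two dense pairs of points; each pair collapses to one representative.
X = np.array([[0.0, 0.0], [0.05, 0.0], [1.0, 1.0], [1.02, 1.0]])
reduced = prune(X, min_dist=0.1)
print(len(reduced))  # 2
```

The data-dependence of `min_dist` noted in the statement is visible here: halving it would keep all four points, doubling it may merge distinct clusters.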
“…[Figure residue: taxonomy of fast-training approaches — Reduction Sampling [9–18], Kernel Matrix [24–29], Optimization [5–8], Fast Inference [2, 30–33]; pre-processing vs. post-processing.] …efficient if the number of support vectors is low. Also note that under mild assumptions, SVDD is equivalent to ν-SVM [20].…”
Section: Fast Trainingmentioning
confidence: 99%
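Given the equivalence to ν-SVM noted in the statement above, scikit-learn's `OneClassSVM` (a ν-formulation with an RBF kernel) can stand in for an SVDD trainer in practice; a minimal sketch, with illustrative parameter values:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))   # one-class training set around the origin

# nu upper-bounds the fraction of training points treated as outliers;
# with a Gaussian kernel this nu-SVM boundary matches the SVDD ball.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
pred = clf.predict(np.array([[0.0, 0.0], [10.0, 10.0]]))
print(pred)  # origin inside the boundary (+1), far point outside (-1)
```

Inference cost scales with the number of support vectors, which is why the cited taxonomy calls kernel one-class methods efficient only when that number is low.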