2022
DOI: 10.3390/math10203776
On Subsampling Procedures for Support Vector Machines

Abstract: Herein, theoretical results are presented to provide insights into the effectiveness of subsampling methods in reducing the amount of instances required in the training stage when applying support vector machines (SVMs) for classification in big data scenarios. Our main theorem states that under some conditions, there exists, with high probability, a feasible solution to the SVM problem for a randomly chosen training subsample, with the corresponding classifier as close as desired (in terms of classification e…
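The abstract's central claim — that an SVM trained on a random subsample can, with high probability, come close to the full-data classifier — can be illustrated with a minimal sketch. This is not the paper's experimental setup: the synthetic data, the 5% subsample rate, and the use of scikit-learn's LinearSVC are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's setup): compare an SVM
# trained on all data with one trained on a uniform 5% random subsample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# A synthetic "big data" binary classification problem.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: linear SVM fit on the full training set.
full = LinearSVC(dual=False).fit(X_train, y_train)

# Subsampled: linear SVM fit on a uniformly random 5% of the training set.
idx = rng.choice(len(X_train), size=len(X_train) // 20, replace=False)
sub = LinearSVC(dual=False).fit(X_train[idx], y_train[idx])

print(f"full-data accuracy: {full.score(X_test, y_test):.3f}")
print(f"subsample accuracy: {sub.score(X_test, y_test):.3f}")
```

On well-separated data the two test accuracies typically land close together, which is the qualitative behavior the paper's theorem formalizes.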

Cited by 4 publications (3 citation statements)
References 38 publications
“…A different approach is presented in [14], where the data reduction includes the assignment of large weights to important samples and the reduction of the features, using graph and self-paced learning. The methods in [6] and [5] use nearest neighbors with sub-sampling in order to select a subset of significant instances. They start by training an SVM on a very small sub-sample of the data set.…”
Section: Related Work
“…This process is repeated until the classification error becomes almost constant. In [5], the authors also provide a theoretical framework for the analysis of random sub-sampling when using SVMs.…”
Section: Related Work
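The iterative procedure quoted above — train on a small subsample, grow it, and stop once the classification error becomes almost constant — can be sketched as follows. The growth rule (adding misclassified points), batch size, and stopping tolerance here are illustrative assumptions, not the cited papers' exact method.

```python
# Hedged sketch of an iterative sub-sampling loop: fit an SVM on a tiny
# subsample, enlarge it with misclassified points, stop when the error
# stabilizes. All specific choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
rng = np.random.default_rng(1)

sample = rng.choice(len(X), size=50, replace=False)  # tiny initial subsample
prev_err = 1.0
for _ in range(10):
    clf = LinearSVC(dual=False).fit(X[sample], y[sample])
    err = 1.0 - clf.score(X, y)
    if abs(prev_err - err) < 1e-3:   # error almost constant -> stop
        break
    prev_err = err
    # Enlarge the subsample with a few currently misclassified points.
    wrong = np.flatnonzero(clf.predict(X) != y)
    extra = rng.choice(wrong, size=min(50, len(wrong)), replace=False)
    sample = np.union1d(sample, extra)

print(f"final subsample size: {len(sample)}, training error: {err:.3f}")
```

The loop terminates either when the error change falls below the tolerance or after a fixed number of rounds, mirroring the "almost constant error" stopping rule described in the citation.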
“…Support Vector Machine (SVM) is a widely used technique applied in many real-world applications (Bárcenas et al. 2022). The theoretical basis for the classification method of SVM is similar to that for Support Vector Regression (SVR), with slight changes.…”
Section: Support Vector Machine