2015
DOI: 10.1016/j.asoc.2015.08.048
Data selection based on decision tree for SVM classification on large data sets

Abstract: Support Vector Machine (SVM) has important properties such as a strong mathematical background and better generalization capability than other classification methods. On the other hand, the major drawback of SVM occurs in its training phase, which is computationally expensive and highly dependent on the …

Cited by 69 publications (28 citation statements)
References 43 publications
“…The authors showed that their algorithm is competitive to four state-of-the-art techniques and is applicable to other classifiers. In a recent paper, Cervantes et al (2015) incorporated an induction tree to reduce the size of SVM training sets. The main idea behind the proposed technique is to train SVMs using significantly smaller refined training sets, and then to label vectors from T as those which are close or far from the decision hyperplane.…”
Section: Neighborhood Analysis Methods
confidence: 99%
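The statement above summarizes the paper's core idea: use an induction tree to shrink the SVM training set, then sort the remaining vectors by their distance to the learned hyperplane. A minimal sketch of that idea follows, assuming a particular selection heuristic (keeping points from impure tree leaves) and a margin threshold of 1.0; the authors' exact algorithm may differ.

```python
# Hedged sketch of decision-tree-based data selection for SVM training
# (assumed heuristic, not the authors' exact method): a shallow tree
# partitions T, points from mixed-label leaves form a small refined
# training set, and the remaining vectors are labeled as close to or
# far from the resulting decision hyperplane.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Fit a shallow induction tree and keep points falling in impure leaves,
# i.e. regions that straddle the class boundary.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)
impure = [leaf for leaf in np.unique(leaf_ids)
          if len(np.unique(y[leaf_ids == leaf])) > 1]
mask = np.isin(leaf_ids, impure)

# Train the SVM on the much smaller refined set.
svm = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask])

# Label the remaining vectors of T as close to or far from the hyperplane.
dist = np.abs(svm.decision_function(X[~mask]))
close = dist < 1.0  # inside or near the margin (assumed threshold)
print(f"refined set: {mask.sum()} of {len(X)}; close vectors: {close.sum()}")
```

Training on the refined set alone is what makes the approach attractive on large data sets: the expensive SVM optimization sees only the boundary-region points, while the cheap tree pass touches all of T.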
“…The hyperplanes themselves are vectors with linear properties. As the dot product computation in larger dimensional space could be complex, the kernel function is defined to address this issue [59,60].…”
Section: Support Vector Machines
confidence: 99%
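The kernel trick mentioned in that statement is easy to verify concretely. The tiny example below uses the degree-2 polynomial kernel k(x, z) = (x · z)², a choice of ours since the cited text does not fix a kernel: it equals a dot product in an explicit quadratic feature space without ever materializing that space.

```python
# The kernel computes the high-dimensional dot product implicitly:
# phi(x) . phi(z) = sum_{i,j} x_i x_j z_i z_j = (x . z)^2.
import numpy as np

def phi(x):
    # Explicit degree-2 feature map: all pairwise products x_i * x_j.
    return np.outer(x, x).ravel()

rng = np.random.default_rng(0)
x, z = rng.normal(size=3), rng.normal(size=3)

explicit = phi(x) @ phi(z)   # O(d^2) features materialized
implicit = (x @ z) ** 2      # kernel: O(d) work, same value
assert np.isclose(explicit, implicit)
print(explicit, implicit)
```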
“…Both the SVM and RF algorithms are widely used for classification. SVM uses different types of points to map to high-dimensional space to construct hyperplane separations, mainly to remedy low-dimensional linear inseparability issues and achieve good performance with feature selection [42][43][44]. RF is a classical ensemble algorithm that constructs a random tree by sampling data and features.…”
Section: Machine Learning Algorithms
confidence: 99%
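To make the SVM/RF contrast in that statement concrete, here is a short sketch on a toy nonlinearly separable problem; the data set and hyperparameters are illustrative choices, not from the cited work. The RBF-kernel SVM handles the low-dimensional linear inseparability through its implicit mapping, while the random forest builds an ensemble of trees over bootstrap samples and random feature subsets.

```python
# Side-by-side sketch of the two classifiers discussed above.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)         # implicit mapping
rf = RandomForestClassifier(n_estimators=100,
                            random_state=0).fit(Xtr, ytr)    # tree ensemble

print("SVM accuracy:", svm.score(Xte, yte))
print("RF accuracy: ", rf.score(Xte, yte))
```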