Advances in Character Recognition 2012
DOI: 10.5772/52227
Decision Tree as an Accelerator for Support Vector Machines

Cited by 12 publications (5 citation statements)
References 17 publications
“…Latent-lSVM in Do and Poulet (2019) partitions the training data set with latent Dirichlet allocation (Blei et al., 2003). DTSVM (Chang et al., 2010; Chang and Liu, 2012) and tSVM (Do and Poulet, 2017) use the decision tree algorithm (Breiman et al., 1984; Quinlan, 1993) to split the full data set into disjoint regions (tree leaves), and then the algorithm builds the local SVMs for classifying the individuals in tree leaves. These algorithms aim at speeding up the learning time.…”
Section: Discussion On Related Work
confidence: 99%
“…krSVM (Do & Poulet, 2015) learns a random ensemble of kSVM models. DTSVM (Chang, Guo, Lin, & Lu, 2010; Chang & Liu, 2012) and tSVM (Do & Poulet, 2016b) use decision tree algorithms (Breiman, Friedman, Olshen, & Stone, 1984; Quinlan, 1993) to split the full training dataset into t terminal nodes (tree leaves); following which the tSVM algorithm builds local SVM models for classifying impure terminal nodes (those with a mixture of labels), while DTSVM learns local SVM models from all tree leaves. These algorithms are shown to reduce the computational cost for dealing with large datasets while maintaining the prediction correctness.…”
Section: Discussion On Related Work
confidence: 99%
“…More recent kSVM, krSVM (random ensemble of kSVM), and tSVM propose to train the local nonlinear SVMs in parallel instead of weighting linear ones as in CSVM. DTSVM uses the decision tree algorithm to split the full dataset into disjoint regions (tree leaves), and then the algorithm builds the local SVMs for classifying the individuals in tree leaves. These algorithms aim at speeding up the learning time.…”
Section: Discussion On Related Work
confidence: 99%
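The excerpts above describe a common scheme: a decision tree partitions the training set into leaves, and local SVMs are then trained per leaf (tSVM restricts this to impure leaves; DTSVM trains one in every leaf). A minimal sketch of that idea, assuming scikit-learn and illustrative names (the `DTSVM` class, `fit`/`predict` methods, and the moons dataset are not from the cited papers):

```python
# Sketch of a decision-tree-accelerated SVM: a shallow tree splits the
# data into leaves; pure leaves keep their single label, impure leaves
# each get a local RBF SVM trained only on their own samples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.datasets import make_moons

class DTSVM:
    def __init__(self, max_depth=3):
        self.tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        self.local_svms = {}   # leaf id -> local SVC (impure leaves)
        self.leaf_label = {}   # leaf id -> single label (pure leaves)

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)            # leaf index for each sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            labels = np.unique(y[mask])
            if len(labels) == 1:               # pure leaf: remember its label
                self.leaf_label[leaf] = labels[0]
            else:                              # impure leaf: train a local SVM
                svm = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask])
                self.local_svms[leaf] = svm
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)            # route each sample to its leaf
        y_pred = np.empty(len(X), dtype=int)
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if leaf in self.local_svms:
                y_pred[mask] = self.local_svms[leaf].predict(X[mask])
            else:
                y_pred[mask] = self.leaf_label[leaf]
        return y_pred

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
model = DTSVM(max_depth=3).fit(X, y)
acc = (model.predict(X) == y).mean()
```

The speed-up claimed in the excerpts comes from the fact that each local SVM solves a quadratic program over only its leaf's samples rather than the full dataset, which matters because SVM training cost grows superlinearly in the number of training points.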