Proceedings of the 7th Latin American Networking Conference 2012
DOI: 10.1145/2382016.2382019

Statistical traffic classification by boosting support vector machines

Cited by 6 publications (5 citation statements)
References 24 publications
“…Adaboost is a boosting algorithm that works on the weights of incorrectly classified instances; it adjusts the weights after classifier n so that subsequent classifiers focus on the harder cases. These techniques have been used in many applications, such as traffic classification [39]. The estimator used in our practical case study is SVM; the regularization factor was fixed and the kernel used was the radial basis function (RBF).…”
Section: Proposed Approach, Materials and Methods
mentioning confidence: 99%
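A minimal sketch of the scheme this excerpt describes, boosting RBF-kernel SVMs with AdaBoost, assuming scikit-learn. The synthetic data stands in for per-flow statistical features, and C=1.0 is an illustrative value, since the excerpt does not preserve the actual regularization factor:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for per-flow statistical features (packet sizes,
# inter-arrival times, ...) and application-class labels.
X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF-kernel SVM as the base learner; C=1.0 is a hypothetical choice.
# AdaBoost reweights the misclassified flows at each boosting round.
base = SVC(kernel="rbf", C=1.0, probability=True, random_state=0)
clf = AdaBoostClassifier(estimator=base, n_estimators=10, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```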
“…Among them, α_i is the Lagrange multiplier, and the corresponding training sample point is a support vector when α_i is nonzero. K is a kernel function used to handle nonlinear problems. C is a penalty coefficient that controls the training error.…”
Section: Proposed Methods of SPP-SVM
mentioning confidence: 99%
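For reference, a textbook statement of the soft-margin SVM dual that these symbols (α_i, K, C) come from; this is the standard formulation, not text from the cited paper:

```latex
% Soft-margin SVM dual problem:
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
    \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0,
  \qquad 0 \le \alpha_i \le C.
% Decision function (only support vectors, alpha_i > 0, contribute):
f(x) = \operatorname{sign}\!\Big( \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b \Big)
```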
“…K is a kernel function used to handle nonlinear problems [32,33]. C is a penalty coefficient that controls the training error.…”
Section: Two-Class SVM
mentioning confidence: 99%
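A minimal sketch of the RBF kernel referenced in these excerpts: it scores how close two feature vectors are, which is what lets the SVM draw nonlinear decision boundaries. The width parameter gamma is an illustrative assumption:

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||^2); gamma is hypothetical."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return np.exp(-gamma * np.sum((x - z) ** 2))

# Identical points score 1; distant points decay toward 0.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # -> 1.0
print(rbf_kernel([1.0, 2.0], [4.0, 6.0]))  # -> close to 0
```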
“…The filter-wrapper feature selection method selects the key feature combinations accurately and avoids falsely deleting combined features, which reduces the dimensionality and shortens the training time. Parameter optimization is used in references [21,33], but it is the original grid-search method, which carries a higher risk of overfitting. The back-propagation algorithm in [17] is a local search algorithm, which may fall into a locally optimal solution.…”
Section: Comparison of IGS and Other Related Approaches
mentioning confidence: 99%
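A minimal sketch of the plain grid search this excerpt critiques: an exhaustive cross-validated sweep over the SVM hyperparameters (C, gamma), assuming scikit-learn. The parameter grid values are illustrative assumptions:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic stand-in for labeled traffic-flow features.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100],     # hypothetical search grid
                "gamma": [0.01, 0.1, 1]},
    cv=5,  # cross-validation reduces, but does not remove, overfitting risk
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```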
“…
Method                 | Feature Selection       | Parameter Optimization | Classification Accuracy
IGS_filter-wrapper_SVM | Filter-Wrapper          | IGS                    | more than 99.34%
Ensemble learning [15] | Identification Engineer | no                     | more than 99%
Ensemble learning [16] | Burst Threshold         | no                     | more than 80%
Deep learning [17]     | No                      | Back-Propagation       | more than 80%
SVM [21]               | Sequential forward      | Grid-Search            | 97.17%
SVM [33]               | No                      | Grid-Search            | more than 95%
…”
Section: Feature Selection, Parameter Optimization, and Classification Accuracy
mentioning confidence: 99%