A comprehensive survey on support vector machine classification: Applications, challenges and trends
2020
DOI: 10.1016/j.neucom.2019.10.118

Cited by 1,356 publications (586 citation statements)
References 219 publications
“…New examples are classified according to which side of the maximum-margin hyperplane they fall on. A detailed explanation of SVMs can be found in [38] and [39], while examples of applying SVMs to text classification can be found in [40] and [41].…”
Section: B: Support Vector Machines (SVMs) (citation type: mentioning)
confidence: 99%
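The classification rule in this excerpt is straightforward to demonstrate. The sketch below, assuming scikit-learn and using toy documents not drawn from the cited works, trains a linear SVM for text classification and reports the signed distance of a new document to the learned hyperplane.

```python
# Minimal sketch (illustrative data, not from [38]-[41]): a linear SVM
# classifies a new document by the side of the separating hyperplane it
# falls on; decision_function gives the signed distance to that hyperplane.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["cheap meds now", "meeting at noon", "win money fast", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # toy labels: 1 = spam, 0 = ham

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, labels)

new_doc = ["win free meds"]
print(model.decision_function(new_doc))  # signed distance to the hyperplane
print(model.predict(new_doc))            # sign of the distance -> class
```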
“…Although Naive Bayes classification is an effective model for diagnosing melanoma, the decision tree algorithm is not well suited to this domain [57]. It is important to note that SVM is less popular for large data sets because it requires a significant amount of training time; in our research, however, this was not a limitation [56]. Compared with recent work on SVM classification for melanoma (85.19% [58], 92.1% [59], 96% [60], 90% [61], 97.32% [62]), we achieved 98.9% accuracy with the proposed technique.…”
Section: Results (citation type: mentioning)
confidence: 99%
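The training-time caveat noted here is easy to observe empirically. The sketch below uses synthetic data (sizes and timings are illustrative assumptions, not from [56]) to contrast a kernel SVM, whose fit time grows quickly with sample count, against a linear SVM solver.

```python
# Hedged illustration of SVM training cost on larger data sets: the kernel
# solver (SVC) scales poorly in the number of samples, while the linear
# solver (LinearSVC) stays fast. Data are synthetic.
import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=10000, n_features=30, random_state=0)

for clf in (SVC(kernel="rbf"), LinearSVC(dual=False)):
    start = time.perf_counter()
    clf.fit(X, y)
    print(f"{type(clf).__name__}: {time.perf_counter() - start:.2f}s")
```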
“…SVM is proven to be optimal for linearly separable cases, and its strategy of finding the maximum-margin hyperplane is among the best for reducing prediction error [54]. In general, SVM is better suited to two-class classification problems with a smaller number of features [55,56]. Naive Bayes, however, can handle a larger number of features more easily.…”
Section: Case 3: Combination of Spectrophotometry and HFUS Imaging Te… (citation type: mentioning)
confidence: 99%
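A minimal sketch of the comparison drawn in this excerpt, assuming scikit-learn: a linear-kernel (maximum-margin) SVM against Gaussian Naive Bayes on a small two-class problem. The data set and split are illustrative choices, not those used in [54-56].

```python
# Two-class comparison sketch: maximum-margin SVM vs. Naive Bayes.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (SVC(kernel="linear"), GaussianNB()):
    clf.fit(X_tr, y_tr)
    print(f"{type(clf).__name__}: test accuracy = {clf.score(X_te, y_te):.3f}")
```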
“…These data can be plotted in a 2D coordinate system (i.e., a PCA score plot). When the classes of the data are known (e.g., stressed vs. healthy), it is possible to draw the line that best separates the data into two classes; this line is called a decision boundary (demonstrated in Figure 12 [127]). The procedure also extends to three or more dimensions, where the boundary becomes a plane in three dimensions or a hyperplane in higher dimensions.…”
Section: Support Vector Machine (SVM) (citation type: mentioning)
confidence: 99%
“…For dimensions higher than three, the decision boundary is a hyperplane. Reprinted with permission from [127]. ©2020 Elsevier.…”
Section: Support Vector Machine (SVM) (citation type: mentioning)
confidence: 99%
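The workflow in these two excerpts (project the data onto a 2D PCA score plot, then find the line that best separates the two classes) can be sketched as below. The data set and labels are illustrative stand-ins for the stressed-vs-healthy example in [127].

```python
# Sketch: PCA score plot + linear SVM decision boundary. In 2D the boundary
# is the line w.x + b = 0; in higher dimensions it becomes a hyperplane.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
keep = y < 2                                         # two classes only (toy stand-in)
scores = PCA(n_components=2).fit_transform(X[keep])  # 2D PCA score plot

clf = SVC(kernel="linear").fit(scores, y[keep])
w, b = clf.coef_[0], clf.intercept_[0]
print(f"decision boundary: {w[0]:.2f}*PC1 + {w[1]:.2f}*PC2 + {b:.2f} = 0")
```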