2006
DOI: 10.1016/j.neucom.2005.12.118
Support vector machine interpretation

Cited by 27 publications (20 citation statements); references 4 publications.
“…For instance, PCA used for logistic regression in E 2 produces components that are delicate to explain causally. Similarly, SVM and NN often lack explanative power, i.e., it is difficult for end users to extract concrete rules as their internal mechanisms are complex to interpret, and the insights into their learning process and decisions are limited [28,34]. In contrast, decision tree learners produce rules that are easier to extract [18].…”
Section: Discussion (confidence: 99%)
“…Fish type identification based on fish eye images and useful parameters extracted from these images is the methodology adopted in the designed system. Support vector machine (SVM) has proven to be an effective tool for pattern recognition [1]. The typical method for applying SVM to multiclass problems (N classes) is to construct N number of binary-SVM classifiers, each of which identifies one class among N different classes.…”
Section: Introduction (confidence: 99%)
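The statement above describes the standard one-vs-rest construction: for N classes, train N binary SVMs, each separating one class from the remaining N−1, and classify by the highest decision score. A minimal sketch of that scheme, using scikit-learn's `SVC` on an illustrative toy dataset (the data, kernel choice, and cluster layout are assumptions, not taken from the cited work):

```python
import numpy as np
from sklearn.svm import SVC

def one_vs_rest_svm(X, y):
    """Train one binary SVM per class (class c vs. all others)."""
    classes = np.unique(y)
    models = {}
    for c in classes:
        clf = SVC(kernel="linear")            # one binary classifier per class
        clf.fit(X, (y == c).astype(int))      # relabel: class c = 1, rest = 0
        models[c] = clf
    return classes, models

def predict(classes, models, X):
    # Stack each classifier's decision score; pick the argmax per sample.
    scores = np.column_stack([models[c].decision_function(X) for c in classes])
    return classes[np.argmax(scores, axis=1)]

# Toy 3-class problem: three well-separated Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(20, 2)) for m in (0, 3, 6)])
y = np.repeat([0, 1, 2], 20)

classes, models = one_vs_rest_svm(X, y)
pred = predict(classes, models, X)
print("training accuracy:", (pred == y).mean())
```

Ties and uncalibrated scores are the known weaknesses of this construction; libraries typically break ties by the raw margin, as done here.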
“…For example, Cho et al [17] described an approach using a specialized kernel function and a nomogram [4], though this was more a visualization than an interpretation. Navia-Vázquez and Parrado-Hernández [74] described an approach to interpreting SVM classification models based on segmenting the input space using the prototypes extracted from the trained model. In the area of QSAR modeling, Usdun et al [106] described an approach to visualizing and interpreting support vector regression (SVR) models.…”
Section: Interpretation Methodologies (confidence: 99%)