2019
DOI: 10.1007/s11590-019-01482-1
Feature selection in SVM via polyhedral k-norm

Abstract: We treat the Feature Selection problem in the Support Vector Machine (SVM) framework by adopting an optimization model based on use of the ℓ₀ pseudo-norm. The objective is to control the number of non-zero components of the normal vector to the separating hyperplane, while maintaining satisfactory classification accuracy. In our model the polyhedral norm ‖·‖_[k], intermediate between ℓ₁ and ℓ∞, plays a significant role, allowing us to come out with a DC (Difference of Convex) optimization problem that is tackled by m…
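To make the abstract's central object concrete: the polyhedral k-norm ‖x‖_[k] is the sum of the k largest absolute components of x, so ‖x‖_[1] coincides with the ℓ∞ norm and ‖x‖_[n] with the ℓ₁ norm. A minimal sketch (NumPy; the function name k_norm is ours, not the paper's):

    import numpy as np

    def k_norm(x, k):
        """Polyhedral k-norm: sum of the k largest absolute components of x."""
        mags = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]  # |x_i|, decreasing
        return mags[:k].sum()

    x = np.array([3.0, -1.0, 0.5, -2.0])
    print(k_norm(x, 1))       # 3.0 = max_i |x_i|  (the l-infinity norm)
    print(k_norm(x, x.size))  # 6.5 = sum_i |x_i|  (the l1 norm)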

Cited by 24 publications (32 citation statements)
References 31 publications
“…Let us recall that ρ_{d,k} is introduced in [9] as an intersection of half-spaces of R^d. We recover this description as a consequence of Theorem 2.1.…”
Section: The Geometry of ρ_{d,k}
confidence: 99%
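A brief gloss on that half-space description (our note, under the assumption that ρ_{d,k} denotes the unit ball of the k-norm in R^d): ‖x‖_[k] equals the maximum of ∑_{i∈I} s_i x_i over all index sets I of size k and sign patterns s ∈ {−1, +1}^k, so the unit ball is an intersection of finitely many half-spaces. A brute-force numeric check of this identity:

    import itertools
    import numpy as np

    def k_norm(x, k):
        # Sum of the k largest absolute components.
        return np.sort(np.abs(x))[::-1][:k].sum()

    def k_norm_as_max(x, k):
        # max over index sets I with |I| = k and signs s_i in {-1, +1}
        # of sum_{i in I} s_i * x_i -- the half-space description.
        d = len(x)
        return max(sum(s * x[i] for s, i in zip(signs, I))
                   for I in itertools.combinations(range(d), k)
                   for signs in itertools.product((-1, 1), repeat=k))

    x = np.random.default_rng(1).normal(size=5)
    assert np.isclose(k_norm(x, 2), k_norm_as_max(x, 2))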
“…In particular, it is shown in [18] that these norms are a solution to an optimization problem regarding the conditional value at risk. The same norms also appear in optimization problems over sets of matrices [21], where they are called vector k-norms, and in sparse optimization [9,10].…”
Section: Introduction
confidence: 99%
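On the sparse-optimization connection mentioned above (our gloss): since ‖x‖_[k] sums the k largest magnitudes, the gap ‖x‖_1 − ‖x‖_[k] is non-negative and vanishes exactly when x has at most k non-zero components, which is what makes the k-norm a convenient surrogate for constraints on the ℓ₀ pseudo-norm. A quick numeric illustration:

    import numpy as np

    def k_norm(x, k):
        return np.sort(np.abs(x))[::-1][:k].sum()

    def l0_gap(x, k):
        # ||x||_1 - ||x||_[k]: zero iff x has at most k non-zero components.
        return np.abs(x).sum() - k_norm(x, k)

    sparse = np.array([0.0, 2.0, 0.0, -1.0])  # two non-zeros
    dense = np.array([1.0, 2.0, 3.0, -1.0])   # four non-zeros
    print(l0_gap(sparse, 2))  # 0.0 -> ||x||_0 <= 2
    print(l0_gap(dense, 2))   # 2.0 -> more than 2 non-zeros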
“…Support vector machines (SVM) were first introduced by Vapnik and Cortes [6] and have been widely applied in many fields, including text and image classification [12,25], disease detection [8,16], etc. The decision hyperplane of the SVM classifier, ⟨w, x⟩ + b = 0 with w ∈ R^n and b ∈ R, is trained from the data set {(x_i, y_i), i ∈ N_m}, where x_i ∈ R^n, y_i ∈ {−1, 1} and N_m := {1, 2, …, m}, by optimizing the following problem: min_{w∈R^n, b∈R}…”
Section: Introduction
confidence: 99%
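Since the quoted problem is truncated, here is a hedged sketch of the standard soft-margin linear SVM such a setup typically leads to, min_{w,b} (1/2)‖w‖² + C ∑_i ξ_i subject to y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i and ξ_i ≥ 0 (our reconstruction, not necessarily the citing paper's exact formulation; the toy data and C = 1.0 are assumptions):

    import numpy as np
    from sklearn.svm import LinearSVC

    # Toy two-class data: x_i in R^2, y_i in {-1, +1}.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.5, size=(20, 2)),
                   rng.normal(+1.0, 0.5, size=(20, 2))])
    y = np.array([-1] * 20 + [+1] * 20)

    # Soft-margin linear SVM (scikit-learn's LinearSVC; it uses the squared
    # hinge loss by default, a close relative of the classical formulation).
    clf = LinearSVC(C=1.0).fit(X, y)
    w, b = clf.coef_.ravel(), clf.intercept_[0]
    print("hyperplane: <w, x> + b = 0, w =", w, ", b =", b)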