2016
DOI: 10.1007/s10994-015-5542-8
Large margin classification with indefinite similarities

Abstract: Classification with indefinite similarities has attracted attention in the machine learning community. This is partly due to the fact that many similarity functions that arise in practice are not symmetric positive semidefinite, i.e., the Mercer condition is not satisfied or is difficult to verify. Examples of such indefinite similarities in machine learning applications are ample, including, for instance, the BLAST similarity score between protein sequences, human-judged similarities between…

Cited by 10 publications (4 citation statements)
References 26 publications
“…However, while our method beats others most of the time and consistently, it also fails in a few cases, in line with the no-free-lunch theorem: no method performs best 100% of the time. Moreover, [20] discussed SVMs with indefinite kernels and showed that the similarity function used in the L1-norm or LP SVM does not need to be positive semidefinite; we use the LP SVM, which remains convex even if the similarity matrix is indefinite. Although in some cases, compared to the sparse machines, our similarity-based SVM [...] As outlier patterns lie outside the decision boundary of their own class, their similarities to patterns of the opposite class are generally higher than their similarities to patterns of their own class; hence, for a good similarity function, the ratio of the summed similarities to the opposite class over those to the own class should be higher for such outliers.…”
Section: Discussion (mentioning)
confidence: 99%
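The excerpt above leans on the fact that the L1-norm (LP) SVM stays a linear program, hence convex, whether or not the similarity matrix is PSD. Below is a minimal sketch of that formulation under simplifying assumptions: the variable names (S, y, C), the uniform use of all training points as "landmarks", and the plain scipy.optimize.linprog solver are illustrative choices, not the cited authors' implementation.

```python
# L1-norm (LP) SVM over a similarity matrix: learn f(x) = sum_j alpha_j * S(x, x_j) + b
# by minimizing ||alpha||_1 + C * sum(xi) subject to y_i * f(x_i) >= 1 - xi_i.
# This is an LP in (alpha+, alpha-, b+, b-, xi), so convexity does not
# depend on S being positive semidefinite.
import numpy as np
from scipy.optimize import linprog

def lp_svm_fit(S, y, C=1.0):
    """S : (n, n) similarity matrix (may be indefinite); y : (n,) labels in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    n = S.shape[0]
    # Variables: alpha+ (n), alpha- (n), b+, b-, xi (n), all >= 0.
    c = np.concatenate([np.ones(2 * n),   # L1 penalty on alpha
                        np.zeros(2),      # bias is unpenalised
                        C * np.ones(n)])  # hinge slacks
    # Margin constraints y_i (S_i (a+ - a-) + b+ - b-) + xi_i >= 1,
    # rewritten in the A_ub x <= b_ub form linprog expects.
    Y = np.diag(y)
    A_ub = np.hstack([-Y @ S, Y @ S, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (3 * n + 2), method="highs")
    x = res.x
    alpha = x[:n] - x[n:2 * n]
    b = x[2 * n] - x[2 * n + 1]
    return alpha, b

def lp_svm_predict(alpha, b, S_test):
    """S_test : (m, n) similarities between test and training points."""
    return np.sign(S_test @ alpha + b)
```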
“…Learning with an indefinite kernel or a non-PSD similarity matrix has attracted considerable attention [11][12][13][14][15][16][17][18][19]. [20] divided recent work on training SVMs with indefinite kernels into three main kinds: PSD kernel approximation, non-convex optimization, and learning in Krein spaces, and concluded that none is fully adequate: kernel approximation introduces an inconsistency between the handling of training and test patterns, which harms generalization guarantees; non-convex optimization settles for approximate local minima; and Krein-space methods produce non-sparse solutions. Another approach, studied in a sequence of papers [1], [21], [13], [22], adopts a "goodness" property that is formally defined for the similarity function and provides both generalization guarantees, in terms of how well suited the similarity function is to the classification task at hand, and the ability to use fast algorithmic techniques.…”
Section: Introduction (mentioning)
confidence: 99%
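For the "goodness" framework the excerpt mentions, a similarity K is roughly (ε, γ)-good if all but an ε fraction of points are, on average, more similar to their own class than to the other, by margin γ. A minimal empirical check is sketched below; the uniform weighting over examples and the function names are simplifying assumptions, as the cited papers define goodness with a general weighting function.

```python
# Empirical (eps, gamma)-goodness check, Balcan-Blum style:
# margin_i = y_i * mean_j(y_j * S[i, j]); the similarity is
# (eps, gamma)-good if at most an eps fraction of margins fall below gamma.
import numpy as np

def goodness_margins(S, y):
    """S : (n, n) similarity matrix, y : (n,) labels in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    return y * (S @ y) / len(y)

def is_eps_gamma_good(S, y, eps, gamma):
    """True if at most an eps fraction of points violate margin gamma."""
    return np.mean(goodness_margins(S, y) < gamma) <= eps
```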
“…There are two main directions for handling the problem of indefiniteness: using methods that are insensitive to it, such as indefinite kernel Fisher discriminant analysis (Haasdonk and Pekalska, 2008) and empirical feature-space approaches (Alabdulmohsin et al., 2016), or correcting the eigenspectrum to be PSD.…”
Section: Introduction (mentioning)
confidence: 99%
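Eigenspectrum correction, the second direction named above, amounts to editing the eigenvalues of the similarity matrix until it is PSD. A minimal sketch follows; "clip" and "flip" are the standard corrections from the indefinite-kernel literature, while the function name and API here are illustrative assumptions.

```python
# Make an indefinite (symmetric) similarity matrix PSD by correcting
# its eigenspectrum: "clip" zeroes out negative eigenvalues, "flip"
# replaces them with their absolute values.
import numpy as np

def correct_spectrum(S, mode="clip"):
    """Return a PSD version of a square similarity matrix S."""
    S_sym = (S + S.T) / 2.0        # symmetrise first
    w, V = np.linalg.eigh(S_sym)   # real eigendecomposition
    if mode == "clip":
        w = np.maximum(w, 0.0)
    elif mode == "flip":
        w = np.abs(w)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return (V * w) @ V.T           # V diag(w) V^T
```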
“…However, unlike in supervised learning, similarity measurement for categorical data in the unsupervised setting has received much less attention to date [8], [9]. Without label information or numerical attributes, it is much more challenging to distinguish different categorical values [10]. So far, only limited efforts have been made, mainly matching-based [11], frequency-based [12], and information-theoretic [13] methods.…”
Section: Introduction (mentioning)
confidence: 99%
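Of the three families the excerpt lists, the matching-based one is the simplest to make concrete: the overlap measure scores two records by the fraction of attributes on which they agree. The sketch below is purely illustrative; the cited papers develop more refined measures.

```python
# Overlap similarity: fraction of categorical attributes on which two
# equal-length records agree.
import numpy as np

def overlap_similarity(a, b):
    """a, b : equal-length sequences of categorical attribute values."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

# Example: records agreeing on 2 of 3 attributes -> 0.666...
print(overlap_similarity(["red", "small", "round"],
                         ["red", "large", "round"]))
```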