2015
DOI: 10.1109/tnnls.2014.2329534

A One-Class Kernel Fisher Criterion for Outlier Detection

Abstract: Recently, Dufrenois and Noyer proposed a one-class Fisher linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Although their approach was originally based on an iterative optimization process alternating between subspace selection and clustering, I show here that their criterion has an upper bound that makes these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP) which c…
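The criterion described above builds on a Fisher-style generalized Rayleigh quotient that contrasts a target-data scatter matrix with an outlier scatter matrix (see the related-work excerpts below). As a rough illustrative sketch only, not the paper's actual kernelized method, the linear version of such a criterion might be written as follows; the data, regularizer, and function name are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a tight "normal" (target) cluster plus a shifted outlier group
X_target = rng.normal(0.0, 0.5, size=(100, 2))
X_outlier = rng.uniform(-1.0, 1.0, size=(10, 2)) + 5.0

def fisher_outlier_direction(X_t, X_o, reg=1e-6):
    """Direction w maximizing outlier scatter over target scatter,
    i.e. a generalized Rayleigh quotient w'So w / w'St w."""
    mu = X_t.mean(axis=0)
    St = (X_t - mu).T @ (X_t - mu)   # target scatter
    So = (X_o - mu).T @ (X_o - mu)   # outlier scatter
    # Solve (St + reg*I)^{-1} So w = lambda w; keep the top eigenvector
    M = np.linalg.solve(St + reg * np.eye(St.shape[1]), So)
    vals, vecs = np.linalg.eig(M)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

w = fisher_outlier_direction(X_target, X_outlier)

# Along w, outliers should project farther from the target mean than targets
proj_t = np.abs((X_target - X_target.mean(axis=0)) @ w)
proj_o = np.abs((X_outlier - X_target.mean(axis=0)) @ w)
print(proj_o.mean() > proj_t.mean())  # expected: True
```

The kernelized criterion of the paper operates in a feature space induced by a kernel rather than on raw coordinates, but the underlying trade-off (close to target data, far from outliers) is the same.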

Cited by 40 publications (23 citation statements) · References 13 publications
“…We believe that the structural information also exists in other learning tasks (e.g., one-class learning [15], [33], ordinal-class learning [31], [34], multiclass learning [35], [36], and so on). In the future, we plan to extend SMPM to one-class learning [15], [33], ordinal-class learning [31], [34], multiclass learning [35], [36], and other learning tasks. Although the binary search procedure can solve SMPM effectively, the running time is not fast enough especially when the training data size (or the feature size) is large.…”
Section: Discussion
confidence: 99%
“…In [31], an incremental version of the method in [27] is proposed to increase computational efficiency. A generalised Rayleigh quotient specifically designed for outlier detection is presented in [28], [36] where the method tries to find an optimal hyperplane which is closest to the target data and farthest from the outliers utilising two scatter matrices corresponding to the outliers and target data. In [36], the generalised eigenvalue problem is replaced by an approximate conjugate gradient solution to moderate the computational cost of the method in [28].…”
Section: Related Work
confidence: 99%
“…A generalised Rayleigh quotient specifically designed for outlier detection is presented in [28], [36] where the method tries to find an optimal hyperplane which is closest to the target data and farthest from the outliers utilising two scatter matrices corresponding to the outliers and target data. In [36], the generalised eigenvalue problem is replaced by an approximate conjugate gradient solution to moderate the computational cost of the method in [28]. A later study [37] tries to address limitations of the method in [28], [36] in terms of the availability of outlier samples and difference in the densities of target and nontarget observations via a null-space approach.…”
Section: Related Work
confidence: 99%
“…The analysis of large volumes of data is hampered by many technical problems, including the ones related to the quality and interpretation of associated information. One-class classifier design is an important research endeavour [1], [2] that can be used to tackle problems of anomaly/novelty detection or, more generally, to recognize outliers in incoming data [3]–[8]. Several different methods have been proposed in the literature, including clustering-based techniques, kernel methods, and statistical approaches (see [9] for a recent survey).…”
Section: Introduction
confidence: 99%