24th International Conference on Pattern Recognition (ICPR), 2018
DOI: 10.1109/icpr.2018.8545819

Subspace Support Vector Data Description

Abstract: In this paper, we propose a novel method for projecting data from multiple modalities to a new subspace optimized for one-class classification. The proposed method iteratively transforms the data from the original feature space of each modality to a new common feature space along with finding a joint compact description of data coming from all the modalities. For data in each modality, we define a separate transformation to map the data from the corresponding feature space to the new optimized subspace by expl…
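The alternating scheme the abstract describes — mapping data into a learned subspace while shrinking a compact, hypersphere-like description of it there — can be sketched very roughly as follows. This is an illustrative, single-modality gradient sketch under assumed names and update rules (`s_svdd_sketch`, the learning rate, the norm constraint are all hypothetical); the paper's actual Lagrangian-based updates are in the full text:

```python
import numpy as np

def s_svdd_sketch(X, d=2, n_iter=50, lr=1e-3):
    """Rough sketch of the alternating idea behind Subspace SVDD:
    learn a projection Q mapping data X (n, D) to a d-dimensional
    subspace while shrinking the mean squared distance of the
    projected points to their center there."""
    rng = np.random.default_rng(0)
    D = X.shape[1]
    Q = rng.standard_normal((d, D)) * 0.1   # projection to subspace
    for _ in range(n_iter):
        Z = X @ Q.T                  # project data: (n, d)
        a = Z.mean(axis=0)           # description center in subspace
        # gradient of mean ||z_i - a||^2 w.r.t. Q (center held fixed)
        G = 2.0 * (Z - a).T @ X / len(X)
        Q -= lr * G
        Q /= np.linalg.norm(Q)       # norm constraint: avoid the
                                     # trivial collapse Q -> 0
    return Q, a
```

The norm constraint is a stand-in for whatever regularization keeps the projection from collapsing; without it, the objective is trivially minimized by scaling Q to zero.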

Cited by 48 publications (26 citation statements); references 78 publications.
“…It thus explains why anomaly detection can benefit from minimizing the reconstruction error. Link to one-class classification based approaches: some one-class classification based methods [4,14,15] minimize the volume of a data-enclosing hypersphere in latent space, which has been shown mathematically to be equivalent to minimizing an upper bound on the entropy of the latent space [16]. This kind of method can thus be linked to optimizing the entropy term H_n(z) in Eq.…”
Section: Relation To Existing Algorithms
confidence: 99%
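The hypersphere-volume objective this excerpt links to latent entropy can be made concrete: in Deep SVDD-style training, the network minimizes the mean squared distance of latent codes to a fixed center. A minimal numpy sketch of just that loss term (the encoder itself is omitted, and `hypersphere_loss` is a hypothetical name):

```python
import numpy as np

def hypersphere_loss(Z, c):
    """Mean squared distance of latent codes Z (n, d) to a fixed
    center c (d,).  Minimizing this shrinks the data-enclosing
    hypersphere in latent space, which the cited analysis [16]
    relates to an upper bound on the latent entropy."""
    return np.mean(np.sum((Z - c) ** 2, axis=1))

# usage: the center is commonly fixed to the mean of an initial
# forward pass, then held fixed during training
rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 8))
c = Z.mean(axis=0)
loss = hypersphere_loss(Z, c)
```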
“…Later on, several more popular methods were developed, like support vector data description (SVDD) 25 or the one-class support vector machine (OC-SVM), 26 which has become one of the most successful tools for one-class classification. Subsequent extensions of SVDD and OC-SVM were proposed to improve their accuracy, such as the Graph Embedded OC-SVM (GE-OC-SVM) and Graph Embedded SVDD (GE-SVDD) 27 or, more recently, the subspace SVDD (S-SVDD). 28 The first approach based on deep learning is the deep SVDD. 29 Alternatively, there are other approaches that employ a set of ellipsoids to fit the region of the data space, 30,31 or others that employ a family of convex hulls for one-class classification, 32 like the well-known approximate polytope ensemble algorithm (APE).…”
Section: Related Work
confidence: 99%
“…where, as in OC-SVM, the ξ's model the slack. There have been extensions of this scheme, such as the mSVDD, which uses a mixture of such hyperspheres [29], density-induced SVDD [30], kernelized variants [52], and, more recently, the use of subspaces for data description [49]. A major drawback of SVDD in general is the strong assumption it makes about the isotropic nature of the underlying data distribution.…”
Section: Background and Related Work
confidence: 99%
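The slack formulation quoted above can be illustrated with a simplified linear version: if the hypersphere center is fixed at the data mean (a simplification — SVDD optimizes the center jointly, usually via the kernelized dual), minimizing R² + (1/(νn)) Σᵢ ξᵢ over the radius alone places roughly νn points outside the sphere, so R² is the (1−ν)-quantile of the squared distances. A hedged numpy sketch (`svdd_soft_boundary` is a hypothetical name, not a library function); the single center and radius also make visible the isotropy assumption the excerpt criticizes:

```python
import numpy as np

def svdd_soft_boundary(X, nu=0.1):
    """Illustrative linear SVDD with the center fixed at the data
    mean.  With xi_i = max(0, ||x_i - a||^2 - R^2), the objective
    R^2 + (1/(nu*n)) * sum(xi_i) is minimized over R^2 when about
    nu*n points fall outside, i.e. at the (1 - nu)-quantile of the
    squared distances."""
    a = X.mean(axis=0)
    d2 = np.sum((X - a) ** 2, axis=1)
    R2 = np.quantile(d2, 1.0 - nu)          # squared radius
    xi = np.maximum(0.0, d2 - R2)           # slack per point
    return a, R2, xi
```

Points with positive slack are the candidate outliers; everything the sphere encloses is described as normal.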