2006
DOI: 10.1109/tpami.2006.52

Estimation of high-density regions using one-class neighbor machines

Abstract: In this paper, we investigate the problem of estimating high-density regions from univariate or multivariate data samples. We estimate minimum volume sets, whose probability is specified in advance, known in the literature as density contour clusters. This problem is strongly related to One-Class Support Vector Machines (OCSVM). We propose a new method to solve this problem, the One-Class Neighbor Machine (OCNM), and we show its properties. In particular, the OCNM solution asymptotically converges to the exact …
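Since the abstract is truncated, here is a minimal sketch of the OCNM idea it describes, assuming the distance to the k-th nearest neighbor as the sparsity measure g (one of the neighborhood measures the paper's framework allows) and a fraction nu of points left outside the estimated minimum volume set. All function and parameter names below are illustrative, not the authors' notation; the decision function is binary, as one of the citation excerpts below notes: +1 inside the estimated MVS, -1 outside.

```python
import numpy as np

def ocnm_fit(X, nu=0.1, k=5):
    """Sketch of One-Class Neighbor Machine training (names are ours).

    Sparsity measure g(x_i): Euclidean distance from x_i to its k-th
    nearest neighbor in the sample. High-density points have small g.
    The threshold rho is chosen so that a fraction (1 - nu) of the
    sample falls inside the estimated minimum volume set (MVS).
    """
    n = X.shape[0]
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    g = np.sort(D, axis=1)[:, k]          # column 0 is the self-distance 0
    rho = np.sort(g)[int(np.ceil((1.0 - nu) * n)) - 1]
    return rho

def ocnm_decision(x, X_train, rho, k=5):
    """Binary decision function: +1 if x is in the estimated MVS, else -1."""
    d = np.linalg.norm(X_train - x, axis=1)
    g_x = np.sort(d)[k - 1]               # distance to k-th nearest training point
    return 1 if g_x <= rho else -1

# Illustrative use on synthetic data: the dense center is accepted,
# a far outlier is rejected.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
rho = ocnm_fit(X, nu=0.1, k=5)
print(ocnm_decision(np.zeros(2), X, rho))      # expected: 1
print(ocnm_decision(np.full(2, 6.0), X, rho))  # expected: -1
```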

Cited by 57 publications (40 citation statements)
References 7 publications
“…Muñoz and Moguerza propose the One-Class Neighbour Machine (OCNM) algorithm for estimating minimum volume sets [16]. The OCNM algorithm is a block-based procedure that provides a binary decision function indicating whether x_t is a member of the MVS or not.…”
Section: B. Minimum Volume Sets
Citation type: mentioning
confidence: 99%
“…Regarding further research, a natural extension is to study the application of this methodology to non-supervised problems (see, for instance, Muñoz and Moguerza 2006), and straightforwardly its use within other kernel-based classification methods.…”
Section: Discussion
Citation type: mentioning
confidence: 99%
“…It is important to remark that the representation z_x is calculated (for every data point) always using the training data set; that is, we are defining a kernel whose closed-form expression depends on a fixed data sample (the training set; see, for instance, Muñoz and Moguerza 2006). From a computational point of view, this technique is very cheap in practice, as it only requires a product of two matrices, and it is advisable when the size of the kernel matrix involved is large.…”
Section: Second Power of the Kernel Matrix
Citation type: mentioning
confidence: 99%
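A hedged illustration of the "second power of the kernel matrix" construction the excerpt describes: on the training set, the new kernel matrix is just K @ K (a single matrix product), and for new points the representation z_x is the vector of kernel evaluations against the fixed training sample. The Gaussian kernel and every name here are our assumptions for the sketch, not the cited paper's code.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))   # fixed training sample

K = rbf_kernel(X_train, X_train)
K2 = K @ K   # second power: K2[i, j] = sum_k K(x_i, x_k) * K(x_k, x_j)

def z(x, X_train, gamma=1.0):
    """Representation of a point as its kernel evaluations on the training set."""
    return rbf_kernel(x[None, :], X_train, gamma)[0]

# For new points, the data-dependent kernel is the inner product <z_x, z_y>;
# on training points this coincides with the entries of K2.
x, y = np.zeros(2), np.ones(2)
k2_xy = z(x, X_train) @ z(y, X_train)
assert np.isclose(z(X_train[0], X_train) @ z(X_train[1], X_train), K2[0, 1])
```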
“…We use a nearest centroid (NC) method for classification, in which a new measurement x_i is assigned to the category j for which the distance between w^T x_i and w^T μ_j is minimal. Many other classifier choices are possible in the reduced R^(c-1) space, including nearest neighbor (NN) [20], nearest subspace [38], support vector machines (SVM) [42], and sparse representation for classification (SRC) [61]. While it is possible that these classifiers may improve the accuracy of the decision, we restricted our attention to NC since the focus of this work is not on the specific classifier algorithm, but rather on the method of learning sensor locations.…”
Section: PCA, LDA, and Classification
Citation type: mentioning
confidence: 99%
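A small sketch of the nearest centroid rule quoted above: project both the measurement and the class means through the discriminant directions w and pick the closest projected centroid. Here w is a random stand-in for the c-1 LDA directions of the cited work, and all names are illustrative assumptions.

```python
import numpy as np

def nc_fit(X, y):
    """Class centroids mu_j in the original measurement space."""
    classes = np.unique(y)
    mus = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, mus

def nc_predict(x, w, classes, mus):
    """Assign x to the class j minimizing ||w^T x - w^T mu_j||."""
    proj_x = w.T @ x                               # point in the reduced space
    d = np.linalg.norm(w.T @ mus.T - proj_x[:, None], axis=0)
    return classes[np.argmin(d)]

# Two Gaussian classes; w stands in for the learned discriminant direction.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
classes, mus = nc_fit(X, y)
w = rng.normal(size=(4, 1))                        # c - 1 = 1 direction here
print(nc_predict(X[0], w, classes, mus))
```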