2018
DOI: 10.1007/s10115-018-1220-z

Instance reduction for one-class classification

Abstract: Instance reduction techniques are data preprocessing methods originally developed to enhance the nearest neighbor rule for standard classification. They reduce the training data by selecting or generating representative examples of a given problem. These algorithms have been designed and widely analyzed in multi-class problems, providing very competitive results. However, this issue has rarely been addressed in the context of one-class classification. In…
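The nearest-neighbor enhancement the abstract refers to can be illustrated with Hart's condensed nearest neighbor (CNN) rule, one of the earliest instance selection algorithms. The sketch below is a generic illustration of that classic technique, not the method proposed in this paper; the data, function name, and parameters are invented for the example.

```python
# A minimal sketch of Hart-style condensed nearest neighbor (CNN) selection,
# a classic instance reduction method for the 1-NN rule. Illustrative only.
import numpy as np

def condense(X, y, seed=0):
    """Select a subset S of (X, y) such that every training instance
    is classified correctly by the 1-NN rule over S."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]                   # seed the store with one instance
    changed = True
    while changed:                      # repeat until no instance is absorbed
        changed = False
        for i in order:
            if i in keep:
                continue
            S = np.asarray(keep)
            # 1-NN prediction of instance i using only the stored subset
            nearest = S[np.argmin(np.linalg.norm(X[S] - X[i], axis=1))]
            if y[nearest] != y[i]:      # misclassified -> add to the store
                keep.append(i)
                changed = True
    return np.asarray(keep)

# Toy usage: two Gaussian blobs; the condensed set is much smaller.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
idx = condense(X, y)
print(f"kept {len(idx)} of {len(X)} instances")
```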

Cited by 26 publications (17 citation statements)
References 51 publications

“…Although evolutionary algorithms perform efficiently, a pitfall is that they must store all training-instance subsets in memory for large datasets, which can also reduce the efficiency of the testing task. A solution to this problem was introduced in [28]: an instance selection method that removes redundant and noisy instances. [29] provides a survey of existing techniques for reducing the storage requirements of instance-based learning algorithms, including different reduction and/or reconstruction algorithms for the training set.…”
Section: Related Work
confidence: 99%
“…Although evolutionary algorithms perform efficiently, a pitfall is that they must store all training-instance subsets in memory for large datasets, which can also reduce the efficiency of the testing task. A solution to this problem was introduced in [25]: an instance selection method that removes redundant and noisy instances. Gong et al. [26] provide a survey of existing techniques for reducing the storage requirements of IBL algorithms, including different reduction and/or reconstruction algorithms for the training set.…”
Section: Related Work
confidence: 99%
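The statement above cites an instance selection method that removes redundant and noisy instances. As a rough illustration of the noise-removal half of that idea, here is a Wilson-style edited nearest neighbor (ENN) filter; this is a standard textbook technique, not the specific algorithm of [25]/[28], and the helper name edit_noise is made up for this sketch.

```python
# A minimal sketch of Wilson-style edited nearest neighbor (ENN) filtering,
# one common way to drop noisy instances before nearest-neighbor learning.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def edit_noise(X, y, k=3):
    """Keep only instances whose label agrees with the majority of
    their k nearest neighbors (excluding the instance itself)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # column 0 is the point itself
    neighbor_labels = y[idx[:, 1:]]      # shape (n, k)
    agreement = (neighbor_labels == y[:, None]).mean(axis=1)
    mask = agreement >= 0.5              # majority of neighbors agrees
    return X[mask], y[mask]
```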
“…Regarding the classification stage, the constructed one-class classifiers can distinguish new instances of the target class from unknown instances outside the created decision boundary, flagging the latter as anomalies of the minority class. In particular, one-class classifiers assign an anomaly score to each testing datum, which defines the decision boundary separating normal data from outliers (Khan and Madden, 2014; Krawczyk et al., 2019).…”
Section: Class Imbalanced Datasets
confidence: 99%
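The anomaly-score mechanism this statement describes can be demonstrated with scikit-learn's OneClassSVM, whose decision_function returns a signed score relative to the learned boundary. The sketch below uses synthetic Gaussian data and an illustrative hyperparameter choice (nu=0.05); it is a generic example, not the setup of the cited works.

```python
# A minimal sketch of one-class scoring: fit on target-class data only,
# then score test points against the learned boundary.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_target = rng.normal(0, 1, (200, 2))          # training: target class only
X_test = np.vstack([rng.normal(0, 1, (5, 2)),  # likely inside the boundary
                    rng.normal(6, 1, (5, 2))]) # likely outliers

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_target)
scores = clf.decision_function(X_test)         # > 0 inside, < 0 outside
labels = clf.predict(X_test)                   # +1 target, -1 anomaly
for s, l in zip(scores, labels):
    print(f"score={s:+.3f} -> {'target' if l == 1 else 'anomaly'}")
```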