2015
DOI: 10.1007/s10044-015-0452-8

Editing training data for multi-label classification with the k-nearest neighbor rule

Abstract: Multi-label classification allows instances to belong to several classes at once. It has received significant attention in machine learning and has found many real-world applications in recent years, such as text categorization, automatic video annotation and functional genomics, resulting in the development of many multi-label classification methods. Based on labelled examples in the training dataset, a multi-label method extracts inherent information in order to output a function tha…
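For readers unfamiliar with the multi-label setting described in the abstract, the short sketch below shows a plain k-nearest neighbor classifier handling multi-label (indicator-matrix) targets with scikit-learn. It is a generic illustration with made-up toy data and an arbitrary k, not the editing method proposed in the paper.

```python
# Minimal multi-label k-NN sketch (illustrative only; not the paper's edited-kNN method).
# Assumes scikit-learn is available; the data and k=3 are toy choices.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each instance may carry several labels at once, encoded as a binary indicator row.
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.5, 0.5]])
Y_train = np.array([[1, 0, 0],   # label A only
                    [0, 1, 1],   # labels B and C
                    [0, 1, 1],
                    [1, 0, 0],
                    [1, 1, 0]])  # labels A and B

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, Y_train)           # KNeighborsClassifier accepts multi-label indicator targets
print(clf.predict([[0.85, 0.85]]))  # -> [[0 1 1]]: the query inherits labels B and C from its neighbours
```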


Cited by 54 publications (22 citation statements). References 43 publications.
“…The Wilcoxon paired signed-rank test was used only when two models were compared [60][61][62]. The Friedman test was used for the multiple model comparisons [63][64][65] since more than two models were compared. The first and second rows show the rate of occurrence of line and polygon graphs as density using the reference data. An area with a high occurrence rate means that the majority of graphs were plotted over the area.…”
Section: Lake Tapps (mentioning)
confidence: 99%
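The comparison protocol quoted above (a paired Wilcoxon signed-rank test when exactly two models are compared, the Friedman test when several models are compared over the same datasets) can be sketched with SciPy. The score arrays below are invented placeholders, not data from the cited studies.

```python
# Sketch of the two-model vs. multi-model comparison protocol, using SciPy.
# The per-dataset accuracy arrays are made-up placeholders for three hypothetical models.
from scipy.stats import wilcoxon, friedmanchisquare

model_a = [0.81, 0.78, 0.90, 0.74, 0.86, 0.79, 0.83]
model_b = [0.79, 0.80, 0.88, 0.75, 0.84, 0.77, 0.85]
model_c = [0.70, 0.72, 0.81, 0.69, 0.78, 0.71, 0.76]

# Two models: paired Wilcoxon signed-rank test on per-dataset scores.
stat, p = wilcoxon(model_a, model_b)
print(f"Wilcoxon A vs B: statistic={stat:.3f}, p={p:.3f}")

# More than two models: Friedman test over the same datasets.
stat, p = friedmanchisquare(model_a, model_b, model_c)
print(f"Friedman A/B/C: statistic={stat:.3f}, p={p:.3f}")
```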
“…A number of mature classification algorithms are now available, such as K-Nearest Neighbor [5], Support Vector Machine [6], Decision Tree [7] and Artificial Neural Network [8]. These classification algorithms have been successfully applied in fields such as daily life, production and transportation.…”
Section: Related Work (mentioning)
confidence: 99%
“…Regarding relabelling procedures, much research has been carried out to identify suspect examples with the intention of suppressing them or relabelling them with a more appropriate competing class [15,20]. This is generally done to enhance classification performance.…”
Section: Related Work (mentioning)
confidence: 99%
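As an illustration of the kind of editing procedure this statement refers to, here is a rough Wilson-style edited nearest-neighbour sketch that flags training examples whose label disagrees with their neighbourhood and either suppresses or relabels them. It is a generic single-label example under assumed toy data, not the specific procedures of [15,20] or the multi-label editing rule of the cited paper.

```python
# Rough sketch of an edited nearest-neighbour (ENN-style) suppress-or-relabel step.
# Generic illustration only; data, k, and the relabelling policy are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def edit_training_set(X, y, k=3, relabel=True):
    """Flag examples whose label disagrees with the majority of their k neighbours,
    then either relabel them to that majority or drop them from the training set."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
    X_out, y_out = [], []
    for i, neighbours in enumerate(idx[:, 1:]):
        votes = np.bincount(y[neighbours], minlength=y.max() + 1)
        majority = votes.argmax()
        if y[i] == majority:                   # label agrees with neighbourhood: keep as-is
            X_out.append(X[i]); y_out.append(y[i])
        elif relabel:                          # suspect example: relabel to the neighbourhood majority
            X_out.append(X[i]); y_out.append(majority)
        # else: suspect example is suppressed (dropped)
    return np.array(X_out), np.array(y_out)

# Toy usage: the fourth point sits in the label-0 cluster but is labelled 1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [1.0, 1.0], [1.1, 1.0], [1.0, 1.1], [1.1, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
print(edit_training_set(X, y, k=3))            # the suspect point is relabelled to 0
```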