2016
DOI: 10.1016/j.patcog.2016.02.017
Labelling strategies for hierarchical multi-label classification techniques

Abstract: Many hierarchical multi-label classification systems predict a real-valued score for every (instance, class) pair, with a higher score reflecting more confidence that the instance belongs to that class. These classifiers leave the conversion of these scores to an actual label set to the user, who applies a cut-off value to the scores. The predictive performance of these classifiers is usually evaluated using threshold-independent measures such as precision-recall curves. However, several applications require ac…
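The conversion the abstract describes can be sketched in a few lines: each instance gets a score per class, and a user-chosen cut-off turns the scores into a predicted label set. This is a minimal illustration; the class names and scores are invented for the example, not taken from the paper.

```python
def labels_from_scores(scores, cutoff):
    """Return the set of class labels whose score meets the cut-off."""
    return {label for label, score in scores.items() if score >= cutoff}

# Hypothetical per-class scores for one instance.
scores = {"protein_binding": 0.92, "catalytic_activity": 0.41, "transport": 0.15}
print(labels_from_scores(scores, cutoff=0.5))  # {'protein_binding'}
```

Lowering the cut-off grows the predicted label set (higher recall, lower precision), which is why threshold-independent curves are the usual evaluation tool.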

Cited by 41 publications (18 citation statements)
References 34 publications
“…Almeida and Borges [26] proposed an adaptation of K-Nearest Neighbours to address quantification learning in HMC. Similarly, Triguero and Vens [27] investigated how different thresholds can increase the performance of Predictive Clustering Trees in this context.…”
Section: Related Work
confidence: 99%
“…The parameter k is often set to a fixed value in other research, or only iterated over a small set of possible values (e.g., 5, 10, 15). However, optimizing k can have a significant effect on reported evaluation metric values.…”
Section: Discussion
confidence: 99%
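The point the quoted discussion makes, that tuning k instead of fixing it can change reported results, can be shown with a stdlib-only sketch. The 1-D toy data, validation split, and candidate k values below are illustrative assumptions, not from the paper.

```python
from collections import Counter

def knn_predict(train, query, k):
    """Majority label among the k nearest 1-D training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def accuracy(train, val, k):
    """Fraction of validation points classified correctly at this k."""
    return sum(knn_predict(train, x, k) == y for x, y in val) / len(val)

# Invented toy data: two clusters of class "a" and "b", one "a" outlier.
train = [(0.0, "a"), (0.2, "a"), (0.4, "a"), (1.0, "b"), (1.2, "b"), (3.0, "a")]
val = [(0.1, "a"), (1.1, "b")]

# Selecting k on held-out data instead of fixing it to a conventional value.
best_k = max([1, 3, 5], key=lambda k: accuracy(train, val, k))
```

Here k = 5 pulls in enough "a" neighbours to misclassify the "b" point, so the reported accuracy drops from 1.0 to 0.5, exactly the kind of sensitivity the quoted passage warns about.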
“…Finally, we apply a single threshold to obtain the bipartition, as different authors have experimentally verified this is as efficient as the more complex methods [3], [15]. We determine the threshold t_min automatically by selecting the value of t_min that minimizes the difference in label cardinality between the actual and predicted label set over all training instances.…”
Section: A. Instance-Based KNN
confidence: 99%
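The threshold-selection rule the quoted passage describes, picking the t that best matches the training set's average label cardinality, can be sketched as follows. The score dictionaries, true label sets, and candidate thresholds are invented toy data for illustration.

```python
def label_cardinality(label_sets):
    """Average number of labels per instance."""
    return sum(len(s) for s in label_sets) / len(label_sets)

def select_threshold(score_dicts, true_sets, candidates):
    """Return the candidate t minimizing |card(true) - card(predicted at t)|."""
    true_card = label_cardinality(true_sets)
    def gap(t):
        predicted = [{label for label, s in d.items() if s >= t}
                     for d in score_dicts]
        return abs(true_card - label_cardinality(predicted))
    return min(candidates, key=gap)

# Invented scores for two training instances and their true label sets.
scores = [{"a": 0.9, "b": 0.6, "c": 0.2}, {"a": 0.8, "b": 0.3, "c": 0.1}]
truth = [{"a", "b"}, {"a"}]
t_min = select_threshold(scores, truth, [0.1, 0.3, 0.5, 0.7])  # 0.5
```

At t = 0.5 the predicted sets have an average cardinality of 1.5, matching the true sets exactly, so that candidate is selected.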
“…Exploring whether a single threshold is appropriate for all of the labels, or whether multiple thresholds, one per label, should be used, is a promising line of future work. Specifically, examining the thresholding strategies of Tsoumakas and Katakis (2007) and Largeron et al (2012) as well as the work of Triguero and Vens (2016) and determining if and how their results can be applied in the streaming setting will be our first step along this avenue.…”
Section: Discussion
confidence: 99%