2015
DOI: 10.1016/j.patcog.2014.07.032

Quantification-oriented learning based on reliable classifiers

Abstract: Real-world applications demand effective methods to estimate the class distribution of a sample. In many domains, this is more productive than seeking individual predictions. At first glance, the straightforward conclusion could be that this task, recently identified as quantification, is as simple as counting the predictions of a classifier. However, due to natural distribution changes occurring in real-world problems, this solution is unsatisfactory. Moreover, current quantification models based on classifi…
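The contrast the abstract draws — naively counting a classifier's positive predictions versus correcting that count for distribution shift — can be sketched as follows. This is an illustrative toy, not the paper's method; the correction follows the classic adjusted-count idea (Forman-style), and all names here are assumptions:

```python
# Hedged sketch (not the paper's code): the naive "classify and count" (CC)
# estimator versus the adjusted count (ACC), which corrects CC using the
# classifier's true/false positive rates estimated on training data.

def classify_and_count(predictions):
    """Estimate positive prevalence as the fraction predicted positive."""
    return sum(predictions) / len(predictions)

def adjusted_count(predictions, tpr, fpr):
    """Correct the CC estimate for classifier errors, clipping to [0, 1]."""
    cc = classify_and_count(predictions)
    if tpr == fpr:          # degenerate classifier: no correction possible
        return cc
    return min(1.0, max(0.0, (cc - fpr) / (tpr - fpr)))

# Toy run: a classifier with tpr=0.8, fpr=0.1 flags 38 of 100 test items.
preds = [1] * 38 + [0] * 62
cc = classify_and_count(preds)                  # 0.38, biased under shift
acc = adjusted_count(preds, tpr=0.8, fpr=0.1)   # corrected prevalence, ~0.4
```

Under distribution shift the raw count stays biased while the adjusted estimate recovers the true prevalence, which is why plain counting is "unsatisfactory" in the abstract's terms.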

Cited by 57 publications (82 citation statements)
References 29 publications
“…Finally, there are quantifiers based on traditional learning methods, like instance-based learning (Barranquero et al., 2013) or decision trees (Milli et al., 2013), but also more recent approaches, like ensembles (Pérez-Gallego et al., 2017) and structured output learning (Esuli and Sebastiani, 2010, 2015; Barranquero et al., 2015). In particular, Barranquero et al. (2015) present a method, called Q, based on building a classifier that optimizes a loss function (Q-measure), inspired by the popular F-measure, that combines the classification and the quantification performance of the model through a parameter β.…”
Section: Other Quantification Methods
confidence: 99%
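The cited Q-measure combines a classification score and a quantification score through β by analogy with F_β. A plausible reconstruction of that harmonic-mean form — where c denotes a classification measure and q a quantification measure, symbols assumed here rather than taken from the paper — is:

```latex
% Reconstruction by analogy with the F-measure, not the paper's exact
% definition: beta trades off classification (c) vs. quantification (q).
Q_\beta = \frac{(1+\beta^2)\, c \cdot q}{\beta^2\, c + q}
```

As with F_β, larger β weights the quantification component more heavily.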
“…In particular, Barranquero et al. (2015) present a method, called Q, based on building a classifier that optimizes a loss function (Q-measure), inspired by the popular F-measure, that combines the classification and the quantification performance of the model through a parameter β. One difficulty in implementing this idea is that not all binary learners are capable of optimizing this kind of metric, because such loss functions are not decomposable as a linear combination of the individual errors.…”
Section: Other Quantification Methods
confidence: 99%
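The non-decomposability point can be made concrete with a toy example (an illustration of the claim, not code from any cited work): two prediction vectors with the same number of per-example errors can differ sharply in quantification error, so a prevalence-based loss cannot be written as a sum of individual losses:

```python
# Sketch: quantification loss is not a sum of per-example losses.
# A false negative and a false positive cancel in the positive count,
# while two false negatives bias it — same error count, different
# quantification error, so no linear decomposition exists.

def errors(y_true, y_pred):
    """Number of misclassified examples (a decomposable loss)."""
    return sum(t != p for t, p in zip(y_true, y_pred))

def quant_error(y_true, y_pred):
    """Absolute difference between true and estimated prevalence."""
    n = len(y_true)
    return abs(sum(y_true) / n - sum(y_pred) / n)

y = [1, 1, 0, 0]
a = [0, 1, 1, 0]   # one FN and one FP: errors cancel in the count
b = [0, 0, 0, 0]   # two FN: the count is biased

print(errors(y, a), errors(y, b))            # 2 2
print(quant_error(y, a), quant_error(y, b))  # 0.0 0.5
```

This is exactly why standard binary learners, whose training objectives sum per-example losses, cannot directly optimize a metric like the Q-measure.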
“…For this reason, most of the experiments reported in the literature employ datasets taken from other problems, like classification or regression, depending on the quantification learning task studied; see for instance [Forman 2008; Bella et al. 2010; Barranquero et al. 2015]. In all these cases, the authors create drifted testing sets artificially.…”
Section: Experimental Designs
confidence: 99%
“…Perhaps the most popular performance measure [Tang et al. 2010; Alaiz-Rodríguez et al. 2008; Barranquero et al. 2015] for binary (and multi-class) quantification is the Kullback-Leibler Divergence, also known as discrimination information, relative entropy, or normalized cross-entropy (see [Esuli and Sebastiani 2010; Forman 2008]). KL Divergence is a special case of the family of f-divergences, and it can be defined for binary quantification as:…”
Section: Performance Measures For Binary Quantification
confidence: 99%
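The quoted statement truncates before the formula. The standard binary-quantification form of the KL divergence — written here with p the true positive prevalence and p̂ the estimated one, notation assumed rather than quoted from the source — is:

```latex
% Standard KLD between true prevalence p and estimate \hat{p}
% for binary quantification (reconstructed, not quoted from the source).
\mathrm{KLD}(p, \hat{p}) = p \log\frac{p}{\hat{p}}
  + (1-p)\log\frac{1-p}{1-\hat{p}}
```

It is zero iff p̂ = p, and in practice the prevalences are usually smoothed to avoid division by zero or log of zero.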