2010 IEEE International Conference on Data Mining
DOI: 10.1109/icdm.2010.75

Quantification via Probability Estimators

Abstract: Quantification is the name given to a novel machine learning task which deals with correctly estimating the number of elements of one class in a set of examples. The output of a quantifier is a real value; since the training instances are the same as in a classification problem, a natural approach is to train a classifier and derive a quantifier from it. Some previous works have shown that just classifying the instances and counting the examples belonging to the class of interest (classify & count) typica…
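The classify & count baseline mentioned in the abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the hard predictions are hypothetical outputs of some trained binary classifier.

```python
def classify_and_count(predictions):
    """Classify & count (CC): estimate the prevalence of the
    positive class as the fraction of instances the classifier
    labels as positive (1)."""
    return sum(1 for y in predictions if y == 1) / len(predictions)

# Hypothetical hard predictions from a trained classifier.
preds = [1, 0, 1, 1, 0, 0, 0, 1]
print(classify_and_count(preds))  # 0.5
```

The known weakness of CC, which motivates probability-based alternatives, is that any systematic bias in the classifier's decisions translates directly into a biased prevalence estimate.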

Cited by 102 publications (145 citation statements)
References 5 publications
“…In (Bella et al, 2010) a probabilistic version of AC is developed. First the authors introduce a simple method called Probability Average (PA), which is clearly aligned with CC.…”
Section: Other Quantification Methods
confidence: 99%
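The Probability Average (PA) method named in the citation above can be sketched as follows. This is a hedged illustration of the idea (averaging posterior probabilities instead of counting hard labels); the posterior scores are hypothetical.

```python
def probability_average(posteriors):
    """Probability Average (PA): estimate the prevalence of the
    positive class as the mean of the classifier's estimated
    posterior probabilities P(positive | x)."""
    return sum(posteriors) / len(posteriors)

# Hypothetical posterior probabilities for the positive class.
scores = [0.9, 0.2, 0.7, 0.8, 0.1, 0.3]
print(round(probability_average(scores), 6))  # 0.5
```

Where classify & count discards everything but the thresholded label, PA keeps the classifier's confidence, which is why the citation describes it as a probabilistic analogue of CC.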
“…For this reason, most of the experiments reported in literature employ datasets taken from other problems, like classification or regression, depending on the quantification learning task studied, see for instance [Forman 2008;Bella et al 2010;Barranquero et al 2015]. In all these cases, the authors create drifted testing sets artificially.…”
Section: Experimental Designs
confidence: 99%
“…Mean Squared Error (MSE) is preferred by some authors [Bella et al 2010; Amati et al 2014b; Asoh et al 2012] over MAE. The difference between the two is that MAE is more robust to outliers and is more intuitive and easier to interpret than MSE, while the advantage of MSE is that it does not assign equal weight to all mistakes, emphasizing the extreme values whose consequences may be much bigger than the equivalent smaller ones for a particular application.…”
Section: Performance Measures For Binary Quantification
confidence: 99%
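The MAE/MSE trade-off described in the citation above is easy to see numerically. This sketch uses hypothetical true and estimated prevalences; note how MSE's squaring makes the single large error dominate.

```python
def mae(true_p, est_p):
    """Mean Absolute Error over a set of prevalence estimates."""
    return sum(abs(t - e) for t, e in zip(true_p, est_p)) / len(true_p)

def mse(true_p, est_p):
    """Mean Squared Error: squares each deviation, so one large
    mistake outweighs several small ones of the same total size."""
    return sum((t - e) ** 2 for t, e in zip(true_p, est_p)) / len(true_p)

# Hypothetical true vs. estimated prevalences on four test sets;
# the last estimate is off by 0.4, the rest are exact.
true_prev = [0.1, 0.3, 0.5, 0.7]
est_prev  = [0.1, 0.3, 0.5, 0.3]
print(round(mae(true_prev, est_prev), 6))  # 0.1
print(round(mse(true_prev, est_prev), 6))  # 0.04
```

Under MAE the single 0.4 error averages out to 0.1, while under MSE it contributes 0.16 before averaging, illustrating why MSE emphasizes extreme mistakes.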