2019
DOI: 10.1609/aaai.v33i01.33014552

DyS: A Framework for Mixture Models in Quantification

Abstract: Quantification is an expanding research topic in Machine Learning literature. While in classification we are interested in obtaining the class of individual observations, in quantification we want to estimate the total number of instances that belong to each class. This subtle difference allows the development of several algorithms that incur smaller and more consistent errors than counting the classes issued by a classifier. Among such new quantification methods, one particular family stands out due to its ac…
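
To make the contrast concrete, here is a minimal sketch (in Python, not the authors' implementation) of the naive 'classify & count' baseline next to a DyS-style mixture quantifier that searches for the class prevalence whose mixture of class-conditional score histograms best matches the test-score histogram. The Hellinger distance stands in for the family of dissimilarity measures covered by the framework, a plain grid search replaces the paper's optimiser, and scores are assumed to lie in [0, 1].

```python
import numpy as np

def classify_and_count(test_scores, threshold=0.5):
    """Naive baseline: fraction of test items the classifier labels positive."""
    return float(np.mean(np.asarray(test_scores) >= threshold))

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (one example of
    a dissimilarity that can be plugged into a DyS-style framework)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def mixture_quantifier(pos_scores, neg_scores, test_scores, bins=10, grid=101):
    """DyS-style idea (illustrative only): pick the prevalence alpha whose
    mixture of class-conditional score histograms is closest to the
    test-score histogram. Assumes classifier scores in [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    hp, _ = np.histogram(pos_scores, bins=edges)
    hn, _ = np.histogram(neg_scores, bins=edges)
    ht, _ = np.histogram(test_scores, bins=edges)
    hp, hn, ht = hp / hp.sum(), hn / hn.sum(), ht / ht.sum()

    alphas = np.linspace(0.0, 1.0, grid)
    dists = [hellinger(a * hp + (1 - a) * hn, ht) for a in alphas]
    return float(alphas[int(np.argmin(dists))])
```
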

Cited by 17 publications (12 citation statements)
References 12 publications
“…• The methods must be Fisher consistent in the sense of Tasche (2017). This criterion excludes for instance 'classify & count' (Forman, 2008), the 'Q-measure' approach (Barranquero et al, 2013) and the distance-minimisation approaches based on the Inner Product, Kumar-Hassebrook, Cosine, and Harmonic Mean distances mentioned in Maletzke et al (2019).…”
Section: Methods for Prevalence Estimation Considered in This Paper
Mentioning confidence: 99%
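
To make the excluded-baseline point concrete: with an imperfect classifier, plain 'classify & count' is biased under prevalence shift, whereas the standard adjusted count of Forman (2008) inverts the true- and false-positive rates and recovers the true prevalence in expectation. The numbers below are hypothetical and serve only to illustrate the bias.

```python
# Illustration (hypothetical numbers): a classifier with tpr=0.8, fpr=0.2
# applied to a test set whose true positive prevalence is 0.1.
tpr, fpr, true_prev = 0.8, 0.2, 0.1

# Expected 'classify & count' output: the fraction predicted positive.
cc = true_prev * tpr + (1 - true_prev) * fpr   # 0.26, far from 0.1

# Adjusted count (Forman, 2008) inverts the confusion rates and recovers
# the true prevalence in expectation.
acc = (cc - fpr) / (tpr - fpr)                 # 0.1
print(cc, acc)
```
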
“…On the basis of the general asymptotic efficiency of maximum likelihood estimators (Theorem 10.1.12, Casella and Berger, 2002), the maximum likelihood approach for class prevalences is a promising approach for achieving minimum confidence interval lengths. In addition, the ML approach may be considered a representative of the class of entropy-related estimators and, as such, is closely related to the Topsøe approach, which was found to perform very well in Maletzke et al. (2019).…”
Section: Methods for Prevalence Estimation Considered in This Paper
Mentioning confidence: 99%
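
A rough sketch of the maximum-likelihood idea this statement refers to (an illustration, not Tasche's derivation or the DyS code): treat each test score as drawn from the mixture alpha * f_pos + (1 - alpha) * f_neg of class-conditional score densities fitted on training data, and return the alpha that maximises the log-likelihood. Histogram density estimates, scores in [0, 1], and a grid search are simplifying assumptions.

```python
import numpy as np

def ml_prevalence(pos_scores, neg_scores, test_scores, bins=10, grid=1001):
    """Maximum-likelihood prevalence estimate on classifier scores.
    Class-conditional densities are approximated by histograms; a finer
    density model (e.g. kernel estimates) could be swapped in."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    eps = 1e-12
    f_pos, _ = np.histogram(pos_scores, bins=edges, density=True)
    f_neg, _ = np.histogram(neg_scores, bins=edges, density=True)

    # Density value of each test score under each class-conditional model.
    idx = np.clip(np.digitize(test_scores, edges) - 1, 0, bins - 1)
    lp, ln = f_pos[idx] + eps, f_neg[idx] + eps

    alphas = np.linspace(0.0, 1.0, grid)
    loglik = [np.sum(np.log(a * lp + (1 - a) * ln)) for a in alphas]
    return float(alphas[int(np.argmax(loglik))])
```
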
“…Many options have gone untested by Goldenberg and Webb (2018); we refer the reader to González et al. (2017) for more options and to Cha and Srihari (2002) for the particularly interesting ORD, which takes the distance between different bins into account. Maletzke et al. (2019) introduce SORD, a version of ORD that dispenses with the discretization of numeric values when comparing univariate sample distributions and can be seen as a particular, fast-to-compute case of the Earth Mover's Distance.…”
Section: Uncertainty About Changes
Mentioning confidence: 99%
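
The benefit of skipping discretization can be seen from the closed form of the one-dimensional Earth Mover's (Wasserstein-1) distance: for univariate samples it equals the area between the two empirical CDFs, so no binning is needed. The function below sketches that general idea rather than the exact SORD definition of Maletzke et al. (2019); scipy.stats.wasserstein_distance computes the same quantity for equally weighted samples.

```python
import numpy as np

def emd_1d(xs, ys):
    """Earth Mover's (Wasserstein-1) distance between two univariate samples,
    computed without binning as the area between the empirical CDFs."""
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    all_vals = np.sort(np.concatenate([xs, ys]))
    # Empirical CDF of each sample evaluated just after each pooled value.
    cdf_x = np.searchsorted(xs, all_vals[:-1], side="right") / len(xs)
    cdf_y = np.searchsorted(ys, all_vals[:-1], side="right") / len(ys)
    return float(np.sum(np.abs(cdf_x - cdf_y) * np.diff(all_vals)))

# Example: identical samples give 0; shifting one sample by 0.1 gives 0.1.
print(emd_1d([0.1, 0.4, 0.7], [0.1, 0.4, 0.7]))   # 0.0
print(emd_1d([0.1, 0.4, 0.7], [0.2, 0.5, 0.8]))   # 0.1
```
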