2021
DOI: 10.1016/j.asoc.2020.107046

Uncertainty in Bayesian deep label distribution learning

Cited by 22 publications (25 citation statements)
References 37 publications
“…Inspired by confident learning (Northcutt et al (2021)), we develop a matrix of label distribution with a Gaussian before estimating the uncertainty of the labels. For the data distribution of each label value, we use a Gaussian distribution to delimit the distribution rather than other multi-peaked distribution priors, and numerous kinds of literature have verified that this approach can eliminate the uncertainty (Liu et al (2021); Zheng et al (2021b); Ghosh et al (2021); Li et al (2022)). Furthermore, to capture the global correlation between labels to generate a standard label distribution, we employ a self-attention mechanism to model the label distribution matrix.…”
Section: Proposed Methods (mentioning)
confidence: 99%
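Read literally, the construction in this excerpt has two pieces: a Gaussian prior that turns each ground-truth label value into a discrete distribution over candidate label values, and a self-attention step over the resulting label distribution matrix that mixes in correlations between labels. The sketch below only illustrates that idea and is not the cited authors' implementation; the label grid, the sigma value, and the identity query/key/value projections are assumptions.

```python
import numpy as np

def gaussian_label_distribution(y, label_values, sigma=1.0):
    """Discrete Gaussian centred on the ground-truth value y, evaluated on a
    grid of candidate label values and normalised to sum to one."""
    d = np.exp(-0.5 * ((label_values - y) / sigma) ** 2)
    return d / d.sum()

def self_attention(X):
    """Minimal single-head self-attention with identity Q/K/V projections,
    applied to the rows of a label distribution matrix X (num_labels, num_values)."""
    scores = X @ X.T / np.sqrt(X.shape[1])            # pairwise label similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # row-wise softmax
    return w @ X                                      # correlation-aware mixture of rows

# Illustrative example: 3 labels, each described over integer values 0..9.
label_values = np.arange(10)
ground_truth = [2.0, 5.0, 7.0]                        # assumed label values
L = np.stack([gaussian_label_distribution(y, label_values, sigma=1.5)
              for y in ground_truth])                 # shape (3, 10), rows sum to 1
L_refined = self_attention(L)                         # rows still sum to 1 (convex mixing)
```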
“…There are two different types of uncertainty in machine learning: epistemic uncertainty and aleatoric uncertainty [2,24,56].…”
Section: Uncertainty In Machine Learning (mentioning)
confidence: 99%
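One standard way to make this split concrete (an illustrative choice, not necessarily the estimator used in the cited works) is Monte Carlo sampling of the network, e.g. MC dropout: the entropy of the mean predictive distribution is the total uncertainty, the mean entropy of the individual samples approximates the aleatoric part, and the gap between them (the mutual information) is the epistemic part.

```python
import numpy as np

def uncertainty_decomposition(prob_samples, eps=1e-12):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    prob_samples: (T, C) class probabilities from T stochastic forward
    passes (e.g. MC dropout) for a single input.
    """
    mean_p = prob_samples.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))        # H[E[p]]: total predictive uncertainty
    aleatoric = -np.mean(np.sum(prob_samples * np.log(prob_samples + eps), axis=1))  # E[H[p]]
    epistemic = total - aleatoric                          # mutual information, >= 0
    return total, aleatoric, epistemic

# Example with made-up probabilities: 5 stochastic passes over a 3-class problem.
samples = np.array([[0.7, 0.2, 0.1],
                    [0.6, 0.3, 0.1],
                    [0.8, 0.1, 0.1],
                    [0.5, 0.4, 0.1],
                    [0.7, 0.2, 0.1]])
total, aleatoric, epistemic = uncertainty_decomposition(samples)
print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```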
“…Distribution function for training data [39]. Some further tuning functions can be achieved through expert parameters [40]. Bernoulli distribution can be used for binominal or 2-class polynominal labels.…”
Section: Bernoulli Distribution (mentioning)
confidence: 99%
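For a two-class label this amounts to a Bernoulli distribution, P(Y = y) = p^y (1 - p)^(1 - y) for y in {0, 1}, so a single probability per instance fully specifies the label distribution. A minimal sketch with made-up numbers (the parameter p below is purely illustrative):

```python
import numpy as np

def bernoulli_pmf(y, p):
    """P(Y = y) for a Bernoulli(p) label, with y in {0, 1}."""
    return p ** y * (1.0 - p) ** (1 - y)

# Hypothetical binary ("binominal") label with P(positive) = 0.8.
p = 0.8
label_distribution = {y: bernoulli_pmf(y, p) for y in (0, 1)}        # {0: 0.2, 1: 0.8}
simulated_labels = np.random.default_rng(0).binomial(n=1, p=p, size=10)
```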