2021
DOI: 10.48550/arxiv.2107.04497
Preprint

Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression

Abstract: Heteroscedastic regression is the task of supervised learning where each label is subject to noise from a different distribution. This noise can be caused by the labelling process, and it negatively impacts the performance of the learning algorithm because it violates the i.i.d. assumptions. In many situations, however, the labelling process is able to estimate the variance of each label's noise distribution, which can be used as additional information to mitigate this impact. We adapt an inverse-variance weighted …


Cited by 2 publications (6 citation statements)
References 34 publications (32 reference statements)
“…Batch Inverse-Variance (BIV) weighting (Mai et al, 2021) leverages the additional information σ_k², which is assumed to be provided, to learn faster and obtain better performance in the case of heteroscedastic noise on the labels. Applied to the L2 loss, it optimizes the neural network parameters θ using the following loss function for a mini-batch D of size K:…”
Section: Batch Inverse-Variance Weighting
Confidence: 99%
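The loss itself is elided in the excerpt above. Written out, the BIV-weighted L2 loss it describes takes the following form (a sketch: the symbols f_θ for the network prediction and y_k for the noisy label are assumed here, not quoted from the excerpt):

```latex
\mathcal{L}_{\mathrm{BIV}}(\theta)
  = \left(\sum_{k=1}^{K}\frac{1}{\sigma_k^{2}+\xi}\right)^{-1}
    \sum_{k=1}^{K}\frac{\bigl(f_\theta(x_k)-y_k\bigr)^{2}}{\sigma_k^{2}+\xi}
```

Each sample's squared error is weighted by the inverse of its (regularized) label-noise variance, and the weights are normalized over the mini-batch.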
“…As explained in Mai et al (2021), ξ is a hyperparameter that is important for the stability of the optimization process. A higher ξ caps the largest weights, thus preventing very-small-variance samples from dominating the loss function for a mini-batch.…”
Section: Batch Inverse-Variance Weighting
Confidence: 99%
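The role of ξ described above is easy to see in a minimal NumPy sketch of inverse-variance weighting (the function name, the example values, and the per-batch weight normalization are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def biv_loss(preds, labels, label_vars, xi=1e-3):
    """Sketch of a Batch Inverse-Variance weighted L2 loss.

    Each sample is weighted by 1/(sigma_k^2 + xi), and the weights are
    normalized over the mini-batch. xi bounds the largest possible weight,
    so a near-zero-variance label cannot dominate the batch loss.
    """
    weights = 1.0 / (label_vars + xi)
    weights = weights / weights.sum()  # normalize over the mini-batch
    return float(np.sum(weights * (preds - labels) ** 2))

preds  = np.array([1.0, 2.0, 3.0])
labels = np.array([1.5, 2.0, 2.0])
sig2   = np.array([1e-6, 0.5, 0.5])  # per-label noise variances

# Tiny xi: the near-noiseless first sample dominates the batch loss.
loss_small_xi = biv_loss(preds, labels, sig2, xi=1e-6)  # ≈ 0.2500
# Larger xi: its weight is capped, spreading influence across the batch.
loss_large_xi = biv_loss(preds, labels, sig2, xi=1.0)   # ≈ 0.3929
```

With ξ near zero, the sample whose label variance is ~10⁻⁶ receives essentially all of the normalized weight; raising ξ to 1.0 flattens the weights toward a plain mean, which is the stability trade-off the quoted passage describes.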