2020
DOI: 10.1007/s10994-020-05900-9
Imbalanced regression and extreme value prediction

Abstract: Research in imbalanced domain learning has almost exclusively focused on solving classification tasks for accurate prediction of cases labelled with a rare class. Approaches for addressing such problems in regression tasks are still scarce due to two main factors. First, standard regression tasks assume each domain value as equally important. Second, standard evaluation metrics focus on assessing the performance of models on the most common values of data distributions. In this paper, we present an approach to…

Cited by 68 publications (49 citation statements)
References 52 publications (51 reference statements)
“…Future research should focus on the development of methods for handling long-range dependence and extreme values. As mentioned in Section 3.4, multiple approaches have recently been proposed to train memory over a longer horizon on the basis of well-established extreme value theory (Ding et al. [2019]; Ribeiro and Moniz [2020]).…”
Section: Discussion
confidence: 99%
“…The MSE loss, however, tends to suffer from extreme values, which are common in heavy-tailed distributions, because the error is squared. Although numerous efforts have been made to predict extreme values (for example, the use of a memory network module and a variable loss function; Ding et al. [2019], Ribeiro and Moniz [2020]), these methods require a long training time because they need to access a different dataset for each batch.…”
Section: Maximum Correntropy Criterion Induced Losses for Regression
confidence: 99%
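The sensitivity of the squared error to a single extreme residual, and the bounded behaviour of a correntropy-induced loss, can be illustrated with a minimal numeric sketch. This is an illustration of the general idea only, not the citing paper's implementation; the Welsch-style loss form and the `sigma` parameter are assumptions.

```python
import numpy as np

def mse_loss(residuals):
    # Squared error: a single extreme residual dominates the mean.
    return np.mean(residuals ** 2)

def correntropy_loss(residuals, sigma=1.0):
    # Welsch / correntropy-induced loss: bounded in the residual, so
    # extreme errors saturate near 1 instead of growing quadratically.
    return np.mean(1.0 - np.exp(-(residuals ** 2) / (2.0 * sigma ** 2)))

residuals = np.array([0.1, -0.2, 0.05, 8.0])  # one extreme error
mse = mse_loss(residuals)            # dominated by the 8.0 residual
robust = correntropy_loss(residuals) # each term bounded above by 1.0
```

Here the one extreme residual contributes 64 of the roughly 16 average squared error, while under the bounded loss its contribution saturates at 1, which is the sensitivity difference the citation statement describes.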
“…Max-min normalization was used for preprocessing the meteorological features. The logarithmic function was used for preprocessing the emission amounts because of their extremely uneven spatial distribution (Ribeiro & Moniz, 2020).…”
Section: Model Development of the Deep Learning Surrogate
confidence: 99%
“…A few were developed for imbalanced regression. Many approaches revolve around modifications of SMOTE, such as SMOTER, adapted to regression [38]; SMOGN, augmented with Gaussian noise [39]; or the work of [40] extending these methods to the prediction of extremely rare values. [41] proposed DenseWeight, a method based on kernel density estimation for better assessment of the relevance function for sample reweighting.…”
Section: Related Work
confidence: 99%
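The density-based reweighting idea behind DenseWeight can be sketched in a few lines: estimate the density of the target values with a kernel density estimate, then give low-density (rare) targets larger training weights. This is a minimal sketch of the idea only, not the DenseWeight implementation; the `alpha` parameter, the min-max scaling of the density, and the mean-one normalization are assumptions.

```python
import numpy as np

def kde(targets, grid, bandwidth=1.0):
    # Simple Gaussian kernel density estimate over the target values.
    diffs = (grid[:, None] - targets[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

def density_weights(targets, alpha=1.0, bandwidth=1.0):
    # Density-based reweighting: rare target values (low density) get
    # larger training weights; alpha controls how strongly density counts.
    dens = kde(targets, targets, bandwidth)
    dens = (dens - dens.min()) / (dens.max() - dens.min())  # scale to [0, 1]
    w = np.maximum(1.0 - alpha * dens, 0.0)
    return w / w.mean()  # normalize so the average weight is 1

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 200), [8.0, 9.0]])  # two rare extremes
w = density_weights(y)
# The rare extreme targets receive above-average weights.
```

The resulting weights can be passed directly to a weighted regression loss, which shifts training effort toward the rare extreme values that standard metrics ignore.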