2022
DOI: 10.1016/j.patter.2021.100428

Robust importance sampling for error estimation in the context of optimal Bayesian transfer learning

Abstract: Classification has been a major task for building intelligent systems because it enables decision-making under uncertainty. Classifier design aims at building models from training data for representing feature-label distributions, either explicitly or implicitly. In many scientific or clinical settings, training data are typically limited, which impedes the design and evaluation of accurate classifiers. Although transfer learning can improve the learning in target domains by incorporating data from relevant sou…

Cited by 4 publications (4 citation statements)
References 26 publications
“…By assigning appropriate weights to each sample drawn from the importance distribution, Importance Sampling can adjust the synthetic data points, giving more significance to those that align well with the target population's distribution. In recent work [44], Importance Sampling helped handle uncertainty in small-sample scenarios during classification tasks by selecting and weighting data points from an alternative distribution. This approach enabled more accurate estimation of classification errors, providing a robust and reliable assessment of classifier performance, even in situations with limited training data.…”
Section: Addressing Data Bias
confidence: 99%
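The weighting scheme described in the statement above can be sketched as a self-normalized importance-sampling estimate of a classification error rate. The distributions, the threshold classifier, and all parameter values below are illustrative assumptions for the sketch, not the setup of the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier(x):
    # Illustrative fixed rule: predict class 1 above the threshold 0.5.
    return (x > 0.5).astype(int)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Target population for class 0 is assumed N(0, 1), but we draw samples
# from a broader importance (proposal) distribution N(0, 2^2) instead.
n = 10_000
proposal_sd = 2.0
x = rng.normal(0.0, proposal_sd, size=n)   # samples from the proposal
true_labels = np.zeros(n, dtype=int)       # all samples belong to class 0

# Importance weights: target density / proposal density.  Samples that
# align well with the target distribution receive more significance.
w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, proposal_sd)

errors = (classifier(x) != true_labels).astype(float)

# Self-normalized importance-sampling estimate of the error rate.
is_error = np.sum(w * errors) / np.sum(w)
```

For this toy target, the exact error rate is P(X > 0.5) for X ~ N(0, 1), about 0.309, so the weighted estimate should land close to that despite every sample having been drawn from the wider proposal.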
“…This section reviews model selection in the context of error estimation under all settings, i.e., traditional machine learning, domain adaptation, and transfer learning. (Maddouri et al, 2022)…”
Section: Related Work
confidence: 99%
“…In parametric methods, there is the popular plug-in estimator, which naively estimates the true error from an empirical model (Maddouri et al, 2022). A major drawback of this approach is that it is strongly dependent on parameter estimation, which may lead to catastrophic failures.…”
Section: Parametric Error Estimation
confidence: 99%
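The plug-in estimator mentioned in the statement above can be illustrated in a few lines: fit an empirical model to a small sample, then evaluate the designed classifier's error under that estimated model rather than the true one. The two-class Gaussian setup and the sample size below are assumptions for the sketch only:

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(1)

# Assumed problem: equal priors, unit variance, class 0 ~ N(0, 1),
# class 1 ~ N(1, 1).  A small sample mimics the limited-data settings
# discussed in the abstract.
mu0_true, mu1_true = 0.0, 1.0
n = 15
x0 = rng.normal(mu0_true, 1.0, n)
x1 = rng.normal(mu1_true, 1.0, n)

# Empirical model: sample means; classifier: midpoint threshold.
mu0_hat, mu1_hat = x0.mean(), x1.mean()
t_hat = 0.5 * (mu0_hat + mu1_hat)

# Plug-in estimate: error of the designed classifier computed under the
# *estimated* Gaussian model.  It inherits any error in mu0_hat, mu1_hat.
plug_in = 0.5 * (1 - phi(t_hat - mu0_hat)) + 0.5 * phi(t_hat - mu1_hat)

# True error of the same classifier under the actual distributions.
true_err = 0.5 * (1 - phi(t_hat - mu0_true)) + 0.5 * phi(t_hat - mu1_true)
```

The gap between `plug_in` and `true_err` is driven entirely by parameter estimation, which is the drawback the statement points out: with poorly estimated parameters the plug-in value can be badly miscalibrated.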
“…There are two broad categories of error or validation estimation schemes: parametric and non-parametric (Maddouri et al, 2022; Liu and Yang, 2011). Non-parametric estimates compute the error rate by counting misclassified points; widely used estimators include resubstitution, cross-validation (CV), and bootstrap estimators. Parametric estimators estimate the true error from an empirical model.…”
Section: Error Estimation
confidence: 99%
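The counting-based non-parametric estimators named in the statement above can be sketched directly. The dataset and the nearest-mean rule below are illustrative assumptions; the point is only how resubstitution and leave-one-out CV count misclassified points:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy data: two 1-D Gaussian classes, 20 points each.
X = np.concatenate([rng.normal(0.0, 1.0, 20), rng.normal(1.5, 1.0, 20)])
y = np.array([0] * 20 + [1] * 20)

def train(X, y):
    """Nearest-mean rule: the model is just the two class means."""
    return X[y == 0].mean(), X[y == 1].mean()

def predict(model, X):
    m0, m1 = model
    return (np.abs(X - m1) < np.abs(X - m0)).astype(int)

# Resubstitution: count errors on the training data itself
# (known to be optimistically biased).
model = train(X, y)
resub = np.mean(predict(model, X) != y)

# Leave-one-out cross-validation: retrain without each point and
# count errors on the held-out point.
errors = 0
for i in range(len(X)):
    mask = np.ones(len(X), dtype=bool)
    mask[i] = False
    m = train(X[mask], y[mask])
    errors += int(predict(m, X[i:i + 1])[0] != y[i])
loo = errors / len(X)
```

Both estimators are pure error counts and make no distributional assumptions, in contrast to the parametric plug-in approach, which evaluates the error under a fitted model.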