2020 25th International Conference on Pattern Recognition (ICPR), 2021
DOI: 10.1109/icpr48806.2021.9411962

Bayesian Active Learning for Maximal Information Gain on Model Parameters

Abstract: The fact that machine learning models, despite their advancements, are still trained on randomly gathered data is proof that a lasting solution to the problem of optimal data gathering has not yet been found. In this paper, we investigate whether a Bayesian approach to the classification problem can provide assumptions under which one is guaranteed to perform at least as well as random sampling. For a logistic regression model, we show that maximal expected information gain on model parameters is a promising c…
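Since the abstract is cut off before stating the criterion in full, the following is a minimal sketch of how an expected-information-gain acquisition for Bayesian logistic regression is commonly estimated: the expected information gain on the parameters from labelling a candidate equals the mutual information between the candidate's label and the weights, which can be computed from posterior samples. This is an illustrative reconstruction under those assumptions, not the authors' implementation; the names X_pool, theta_samples, and expected_information_gain are introduced here for illustration.

import numpy as np

def bernoulli_entropy(p, eps=1e-12):
    # Entropy of a Bernoulli variable with success probability p.
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def expected_information_gain(X_pool, theta_samples):
    # X_pool:        (n_pool, d) unlabelled candidate inputs.
    # theta_samples: (n_samples, d) draws from the current posterior over the
    #                logistic-regression weights (e.g. via MCMC or a Laplace
    #                approximation; obtaining them is not shown here).
    logits = theta_samples @ X_pool.T              # (n_samples, n_pool)
    probs = 1.0 / (1.0 + np.exp(-logits))          # per-sample predictions

    # H[ E_theta p(y|x, theta) ]: entropy of the marginal predictive.
    marginal_entropy = bernoulli_entropy(probs.mean(axis=0))
    # E_theta H[ p(y|x, theta) ]: expected entropy under the posterior.
    expected_entropy = bernoulli_entropy(probs).mean(axis=0)

    # The difference is the mutual information between the candidate label
    # and the parameters, i.e. the expected information gain on the model
    # parameters from labelling that candidate.
    return marginal_entropy - expected_entropy

# Query the pool point with maximal expected information gain:
# best_idx = np.argmax(expected_information_gain(X_pool, theta_samples))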

Cited by 1 publication (1 citation statement)
References 21 publications (33 reference statements)

“…As the reader may have noticed, the goal of AL aligns with ASD in the early stage, when the model has high uncertainty and the predictions are highly unreliable, while in the later stage ASD aims at better discovery performance. Indeed, there are active learning algorithms based on the idea of maximum information gain [3]. [27] applies the idea to the matrix completion problem, significantly improving prediction accuracy over random selection.…”
Section: Introduction
confidence: 99%