2015
DOI: 10.1007/s10994-015-5504-1
Optimised probabilistic active learning (OPAL)

Abstract: In contrast to ever increasing volumes of automatically generated data, human annotation capacities remain limited. Thus, fast active learning approaches that allow the efficient allocation of annotation efforts gain in importance. Furthermore, cost-sensitive applications such as fraud detection pose the additional challenge of differing misclassification costs between classes. Unfortunately, the few existing cost-sensitive active learning approaches rely on time-consuming steps, such as performing self-labell…

Cited by 37 publications (16 citation statements) · References 21 publications

Citation statements (ordered by relevance):
“…at the beginning of the training) and therefore suggested the use of a beta prior. Krempl et al (2015) and Kottke et al (2016) address the issue pointed out by Chapelle and named their approach probabilistic AL. They propose to use a distribution of the class posterior probability instead of using the classifier outputs directly.…”
Section: Introduction · Citation type: mentioning
confidence: 99%
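The Beta-posterior idea quoted above can be made concrete with a short sketch. The following is a minimal illustration under assumed conventions (a uniform Beta(1, 1) prior and a hypothetical expected_error helper), not the actual procedure of Krempl et al. (2015) or Kottke et al. (2016): it scores a candidate by the expected misclassification error under a Beta distribution over the class posterior, built from nearby label counts, rather than from the classifier's point estimate.

```python
# Minimal sketch: a Beta distribution over the class posterior p, instead of a
# point estimate. The Beta(1, 1) prior and function names are assumptions for
# illustration, not taken from the cited papers.
from scipy import stats

def expected_error(n_pos, n_neg):
    """E[min(p, 1 - p)] under the posterior p ~ Beta(n_pos + 1, n_neg + 1)."""
    a, b = n_pos + 1, n_neg + 1          # posterior parameters, Beta(1, 1) prior
    mean = a / (a + b)
    # Split E[min(p, 1 - p)] at p = 0.5 using the identity
    # E[p * 1{p < x}] = mean * CDF_{Beta(a+1, b)}(x):
    below = mean * stats.beta(a + 1, b).cdf(0.5)
    above = (1 - mean) * (1 - stats.beta(a, b + 1).cdf(0.5))
    return below + above

# One positive label looks maximally certain to a point estimate (p_hat = 1.0),
# but the Beta posterior still reports substantial expected error:
print(expected_error(1, 0))    # ~0.25
print(expected_error(10, 0))   # ~0.08, certainty grows with evidence
```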
“…Overall, a small improvement in a dense region may thus be more beneficial than a big improvement in a sparse region. This observation motivates a kind of density weighting, i.e., the combination (multiplication) of an uncertainty degree with the (estimated) density of a data point (Krempl et al 2015). • Last but not least, going beyond uncertainty sampling for binary classification as considered in this paper, the idea of epistemic uncertainty sampling should also be extended toward other learning problems, such as multi-class classification and regression.…”
Section: Discussion · Citation type: mentioning
confidence: 99%
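As a hedged sketch of the density weighting this statement describes, one can multiply an uncertainty degree by an estimated density, so that informative points in dense regions score higher. The Gaussian kernel density estimator and the specific uncertainty measure below are illustrative choices, not prescribed by the cited work.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_weighted_scores(proba, X_pool, bandwidth=1.0):
    """proba: (n, 2) predicted class probabilities for an unlabelled pool X_pool."""
    # Uncertainty degree: 1 at the decision boundary (p = 0.5), 0 when certain.
    uncertainty = 1.0 - 2.0 * np.abs(proba[:, 1] - 0.5)
    # Estimated density of each candidate, here via a Gaussian KDE on the pool.
    kde = KernelDensity(bandwidth=bandwidth).fit(X_pool)
    density = np.exp(kde.score_samples(X_pool))
    # Combination by multiplication, as described in the quoted statement.
    return uncertainty * density

# The next query would be X_pool[np.argmax(density_weighted_scores(proba, X_pool))].
```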
“…ADASYN [13] uses a weighted distribution for the different minority-class examples according to their level of difficulty in learning, generating more synthetic data for the minority examples that are harder to learn. A typical algorithm-level approach is called cost-sensitive learning [18], [19], [20], [21], [22]. We focus on the algorithm-level approach in this paper.…”
Section: Introduction · Citation type: mentioning
confidence: 99%
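To make the algorithm-level, cost-sensitive idea concrete, here is a minimal sketch under assumed values: predict the class that minimises expected misclassification cost rather than the most probable class. The cost matrix below is illustrative (a missed fraud case costing ten times a false alarm), not taken from any of the cited works.

```python
import numpy as np

# cost[i, j] = cost of predicting class j when the true class is i (illustrative).
cost = np.array([[0.0,  1.0],    # false alarm on a legitimate case costs 1
                 [10.0, 0.0]])   # missing a fraud case costs 10

def cost_sensitive_predict(proba):
    """proba: (n, 2) class probabilities from any probabilistic classifier."""
    expected_cost = proba @ cost      # expected cost of each possible prediction
    return expected_cost.argmin(axis=1)

# A case with P(fraud) = 0.2 is still flagged, since 0.2 * 10 > 0.8 * 1:
print(cost_sensitive_predict(np.array([[0.8, 0.2]])))   # -> [1]
```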