2017
DOI: 10.14232/actacyb.23.2.2017.13
Balanced Active Learning Method for Image Classification

Abstract: The manual labeling of natural images has always been a painstaking and slow process, especially when large data sets are involved. Nowadays, many studies focus on solving this problem, and most of them use active learning, which offers a solution for reducing the number of images that need to be labeled. Active learning procedures usually select a subset of the whole data by iteratively querying the unlabeled instances based on their predicted informativeness. One way of estimating the information conten…

Cited by 7 publications (4 citation statements) · References 20 publications
“…Therefore, to evaluate the first step we used sensitivity (TP rate) and specificity (TN rate). Moreover, we used the accuracy (Equation 39) and the balanced accuracy (Equation 40) to measure the performance of the overall classification task [35], because unbalanced data might mislead the conclusions [29]. Lastly, we used the mean squared error to evaluate the goodness of Formula 2 via Formula 37 (ΔA_{K+1}), and aggregated these results over all the experiments.…”
Section: Metrics
confidence: 99%
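The balanced accuracy mentioned in the excerpt above is the mean of per-class recalls, which avoids the inflation that plain accuracy suffers on imbalanced data. A minimal sketch (the function name and the toy labels are illustrative, not from the cited paper):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: each class contributes equally,
    so a majority class cannot dominate the score."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Imbalanced toy data: 8 negatives, 2 positives.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0])
# Plain accuracy is 9/10 = 0.9, but balanced accuracy is
# (recall_0 + recall_1) / 2 = (1.0 + 0.5) / 2 = 0.75.
```

The gap between 0.9 and 0.75 on this toy example is exactly why the citing authors note that unbalanced data might mislead the conclusions.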
“…P(ŷ | x̂, S) = Σ_{i=1}^{k} a(x̂, x_i) y_i (14), where x_i, y_i are the samples and their associated labels from the support set S = {(x_i, y_i)}_{i=1}^{k}, and a(·,·) is the kernel (also known as the attention kernel or attention mechanism). It is worth noting that the above relation produces the output (label) of the samples of the new classes as a linear combination of the sample labels in the support set.…”
Section: Figure 1, Matching Network Architecture
confidence: 99%
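The attention-weighted prediction described in the excerpt above can be sketched as follows. The excerpt leaves the kernel a(·,·) abstract; here a softmax over cosine similarities is assumed as one common choice, and all names and the toy support set are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def matching_predict(x_query, support_x, support_y):
    """Predict the query's label as a linear combination of the
    support-set labels, weighted by an attention kernel a(x̂, x_i).
    Kernel assumed here: softmax over cosine similarities."""
    sims = support_x @ x_query / (
        np.linalg.norm(support_x, axis=1) * np.linalg.norm(x_query) + 1e-12)
    a = softmax(sims)        # attention weights, sum to 1
    return a @ support_y     # weighted sum of one-hot labels

# Toy support set: one prototype per class, one-hot labels.
support_x = np.array([[1.0, 0.0], [0.0, 1.0]])
support_y = np.eye(2)
pred = matching_predict(np.array([0.9, 0.1]), support_x, support_y)
```

Because the weights sum to one and the support labels are one-hot, the output is a valid probability vector; a query near the first prototype gets most of its mass on class 0.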
“…This scenario is a particularly hot issue nowadays: how could a new disease, for which only limited data are available, be diagnosed using features of previous diseases? (If the number of labeled samples is small but a huge amount of unlabeled data is available, this leads to active learning [14]; in this paper, however, we assume that no unlabeled data are available at all.)…”
Section: Introduction
confidence: 99%
“…uncertainty sampling [6], query-by-committee (QBC) [25], expected model change [2], expected error reduction [14], or density-weighted methods [1]. On the other hand, there are recently proposed query strategies, such as uncertainty sampling with diversity maximization [29], the Balanced Active Learning (BAL) method [17], the extended margin and soft balanced strategy [18], the Prototype Based Active Learning (PBAC) algorithm [3], and the hybrid Expected Difference Change (EDC) [19]. However, these approaches expect the labeled set L to be non-empty, because all of them apply some kind of supervised machine learning algorithm (e.g.…”
Section: Related Work
confidence: 99%
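The uncertainty sampling mentioned first in the excerpt above is the simplest of the listed query strategies. A minimal sketch of its least-confident variant (function name and the toy probabilities are illustrative; it assumes a model trained on a non-empty labeled set L has already produced class probabilities for the unlabeled pool U):

```python
import numpy as np

def least_confident_query(probs, n_queries=1):
    """Pick the pool instances whose top predicted class probability
    is lowest, i.e. where the model is least confident.
    probs: (n_samples, n_classes) predicted class probabilities."""
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(uncertainty)[::-1][:n_queries]

probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],   # least confident: queried first
                  [0.80, 0.20]])
```

The selected indices would then be sent to the oracle for labeling, added to L, and the model retrained — the iterative loop described in the paper's abstract.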