2022
DOI: 10.48550/arxiv.2203.13450
Preprint

A Comparative Survey of Deep Active Learning

Abstract: Active Learning (AL) is a set of techniques for reducing labeling cost by sequentially selecting data samples from a large unlabeled data pool for labeling. Meanwhile, Deep Learning (DL) is data-hungry, and the performance of DL models scales monotonically with more training data. Therefore, in recent years, Deep Active Learning (DAL) has risen as a feasible solution for maximizing model performance while minimizing the expensive labeling cost. Abundant methods have sprung up and literature reviews of DAL have b…
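
To make the loop described in the abstract concrete, here is a minimal, self-contained Python sketch of pool-based active learning: train on the labeled set, score the unlabeled pool, query the most uncertain samples, and repeat. The toy data, the logistic-regression model, the entropy criterion, and the batch sizes are illustrative assumptions, not methods from the surveyed paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 5))                     # unlabeled pool (toy data)
y_pool = (X_pool @ rng.normal(size=5) > 0).astype(int)  # hidden "oracle" labels

idx = list(rng.choice(len(X_pool), size=20, replace=False))  # small seed labeled set
for _ in range(10):                                     # 10 labeling rounds
    model = LogisticRegression().fit(X_pool[idx], y_pool[idx])
    probs = model.predict_proba(X_pool)                 # posteriors over the whole pool
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    entropy[idx] = -np.inf                              # never re-pick labeled points
    picked = np.argsort(entropy)[-16:]                  # 16 most uncertain samples
    idx.extend(picked)                                  # the "annotator" labels them
print(f"labeled {len(idx)} of {len(X_pool)} samples")
```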

Cited by 12 publications (21 citation statements)
References 33 publications

Citation statements (ordered by relevance):
“…The more professional the annotators are, especially in the target sports domain, the better the quality of the annotations, which leads to promising performance of action recognition algorithms in real inference tasks. One possible direction is using active learning approaches [314]-[316] to reduce the workload of annotation; 3) Multi-purpose: As a general trend, video datasets for action recognition are rarely designed for a single purpose, and sports datasets are no exception. Some of the video datasets [317], [318] are also designed to accomplish temporal action localization, spatio-temporal action localization, and complex event understanding.…”
Section: Challenges (mentioning)
confidence: 99%
“…While diversity-aware methods work well on small datasets, they may fail to scale to large datasets because they require subset comparisons and selection. The uncertainty-aware methods [23][24][25][26][27][28] screen the pool of unlabeled samples and select the samples with the highest uncertainty under the training model (e.g., LTR models here) for labeling. Uncertainty-aware methods scale easily to large datasets due to their low complexity, and a wide variety of uncertainty criteria have been proposed, such as Monte Carlo estimation of expected error reduction [29], distance to the decision boundary [30,31], margin between posterior probabilities [32], and entropy of posterior probabilities [33][34][35].…”
Section: Related Work (mentioning)
confidence: 99%
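
The posterior-based criteria named in the passage above are simple to compute. The following sketch shows three of them (least confidence, margin, and entropy) over an assumed class-probability matrix; the example values and shapes are illustrative, not taken from the cited works.

```python
import numpy as np

probs = np.array([[0.90, 0.05, 0.05],   # confident prediction
                  [0.40, 0.35, 0.25],   # uncertain prediction
                  [0.34, 0.33, 0.33]])  # near-uniform prediction

least_confidence = 1.0 - probs.max(axis=1)      # 1 - max posterior (high = uncertain)
sorted_p = np.sort(probs, axis=1)
margin = sorted_p[:, -1] - sorted_p[:, -2]      # top-1 minus top-2 (small = uncertain)
entropy = -(probs * np.log(probs)).sum(axis=1)  # Shannon entropy of the posterior

# Samples with high entropy/least-confidence or low margin are queried for labels.
print(least_confidence, margin, entropy, sep="\n")
```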
“…Recently, deep active learning has become a very active branch of research [Ren et al., 2021, Zhan et al., 2022], driven mostly by the need to reduce the typically large amounts of data required to train deep classification models. Deep model structures to which active learning has been applied include stacked Restricted Boltzmann Machines [Wang and Shang, 2014], variational adversarial networks [Sinha et al., 2019], as well as CNNs [Wang et al., 2016, Gal et al., 2017].…”
Section: Related Work (mentioning)
confidence: 99%
“…Like classical active learning, deep active learning uses a variety of criteria for selecting data points. Uncertainty sampling is a major direction here as well [Gal et al., 2017, Yi et al., 2022, Ren et al., 2021, Zhan et al., 2022]. A well-known state-of-the-art method in this context is MC Dropout [Gal et al., 2017], which uses dropout at test time in order to create variants of an initially trained deep neural network model; averaging predictions over these model variants then yields estimates of class probabilities that can be used by standard acquisition functions.…”
Section: Related Work (mentioning)
confidence: 99%
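
A minimal PyTorch sketch of MC Dropout as the passage describes it: keep dropout active at inference, run several stochastic forward passes, and average the softmax outputs to estimate class probabilities for an acquisition function. The tiny network, the dropout rate, and the number of passes (T=20) are illustrative assumptions, not the configuration of Gal et al. [2017].

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.5),              # remains active below
                      nn.Linear(64, 3))

def mc_dropout_probs(model, x, T=20):
    """Average softmax outputs over T stochastic forward passes."""
    model.train()                                     # enable dropout at test time
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return samples.mean(dim=0)                        # estimated class probabilities

x = torch.randn(5, 10)                                # 5 unlabeled samples
probs = mc_dropout_probs(model, x)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
print(entropy)                                        # input to a standard acquisition function
```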