2021
DOI: 10.1609/aaai.v35i11.17205

Nearest Neighbor Classifier Embedded Network for Active Learning

Abstract: Deep neural networks (DNNs) have been widely applied to active learning. Despite their effectiveness, the generalization ability of the discriminative (softmax) classifier is questionable when there is a significant distribution bias between the labeled set and the unlabeled set. In this paper, we attempt to replace the softmax classifier in the deep neural network with a nearest neighbor classifier, considering its progressive generalization ability within the unknown sub-space. Our proposed activ…
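As a rough illustration of the replacement described in the abstract (a sketch under assumptions, not the authors' implementation), the snippet below swaps a softmax head for a nearest-neighbor head: embeddings of labeled samples act as prototypes, a query takes the label of its nearest prototype, and the distance itself can double as an uncertainty signal for active learning. The class name, the cosine metric, and the toy data are all assumptions.

```python
# Minimal sketch (not the paper's code): a nearest-neighbor classification
# head in place of a softmax layer. Labeled embeddings serve as prototypes;
# a query is assigned the label of its closest prototype, and the distance
# to that prototype is returned as an uncertainty score.
import torch
import torch.nn.functional as F

class NearestNeighborHead:
    def __init__(self):
        self.prototypes = None  # (N, D) embeddings of labeled samples
        self.labels = None      # (N,) their class labels

    def fit(self, embeddings: torch.Tensor, labels: torch.Tensor) -> None:
        self.prototypes = F.normalize(embeddings, dim=1)
        self.labels = labels

    def predict(self, queries: torch.Tensor):
        q = F.normalize(queries, dim=1)
        # Cosine distance to every labeled prototype (assumed metric).
        dists = 1.0 - q @ self.prototypes.T      # (B, N)
        nearest = dists.argmin(dim=1)            # index of closest prototype
        uncertainty = dists.min(dim=1).values    # far from all data => uncertain
        return self.labels[nearest], uncertainty

# Usage: embed data with any backbone, then classify by nearest neighbor.
feats = torch.randn(100, 16)                     # toy labeled embeddings
y = torch.randint(0, 5, (100,))
head = NearestNeighborHead()
head.fit(feats, y)
pred, unc = head.predict(torch.randn(8, 16))
```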

Cited by 12 publications (3 citation statements). References 21 publications.
“…RMQCAL [104] is a novel, scalable DAL method that allows for any number and type of query criteria, eliminates the need for empirical parameters, and makes the trade-offs between the query criteria self-adaptive. On the other hand, Wan et al. [189] propose a nearest-neighbor-classifier embedded network that enhances, in a simple but effective manner, the generalization ability of models trained in the labeled and unlabeled subspaces. Deng et al. [190] focus on combining sample annotation and counterfactual sample construction in the DAL procedure to enhance the model's out-of-distribution generalization.…”
Section: Challenges and Opportunities of DAL
confidence: 99%
“…Another current state-of-the-art work [1] uses k-means++ to acquire diverse samples, with an acquisition function built on the magnitude of loss gradients with respect to model parameters as a proxy for potential model change. In [34], the authors suggest using a KNN classifier as the output layer of the network instead of softmax, owing to its better generalization to the unknown space.…”
Section: Related Work
confidence: 99%
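The k-means++-style selection mentioned in the statement above can be sketched as follows; this is a hedged illustration of D²-weighted sampling for batch diversity, with plain feature embeddings standing in for the gradient embeddings the cited work [1] uses, and with the function name and random seed chosen here for illustration only.

```python
# Hedged sketch of k-means++-style diverse batch selection for active
# learning: repeatedly pick an unlabeled point with probability proportional
# to its squared distance from the points already chosen, so the acquired
# batch spreads out over the embedding space.
import numpy as np

def kmeanspp_select(embeddings: np.ndarray, batch_size: int, rng=None) -> list:
    rng = rng or np.random.default_rng(0)
    chosen = [int(rng.integers(len(embeddings)))]  # first pick: uniform
    d2 = np.sum((embeddings - embeddings[chosen[0]]) ** 2, axis=1)
    for _ in range(batch_size - 1):
        probs = d2 / d2.sum()                      # D^2 sampling weights
        idx = int(rng.choice(len(embeddings), p=probs))
        chosen.append(idx)
        # For each point, keep its distance to the nearest chosen point.
        d2 = np.minimum(d2, np.sum((embeddings - embeddings[idx]) ** 2, axis=1))
    return chosen

picked = kmeanspp_select(np.random.randn(500, 32), batch_size=10)
```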
“…For example, BatchBALD [40] uses conditional-entropy minimization as the basis for a decision function, greedily selecting samples that minimize the entropy of the current model. To address overconfident predictions on out-of-distribution samples, caused by the model's reliance on classification probabilities, the NN classifier [41] is designed around nearest neighbours and support vectors so that the model produces high uncertainty for samples far from the existing training data. Different from the above methods, CoreLog [42] measures uncertainty based on proper scoring rules.…”
Section: Related Work
confidence: 99%
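To make the entropy-based acquisition mentioned in the statement above concrete, here is a minimal sketch of the generic idea (scoring unlabeled samples by predictive entropy and querying the most uncertain ones); it is not the exact algorithm of [40], and the function names and toy data are assumptions.

```python
# Hedged sketch of entropy-based acquisition: score each unlabeled sample by
# the entropy of its predictive distribution and request labels for the
# highest-entropy (most uncertain) samples.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    # probs: (N, C) softmax outputs; entropy is high where the model is unsure.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def query_most_uncertain(probs: np.ndarray, k: int) -> np.ndarray:
    scores = predictive_entropy(probs)
    return np.argsort(scores)[-k:]  # indices of the k most uncertain samples

logits = np.random.randn(200, 10)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
to_label = query_most_uncertain(probs, k=16)
```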