Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018
DOI: 10.1145/3219819.3220026

Cost-Effective Training of Deep CNNs with Active Model Adaptation

Abstract: Deep convolutional neural networks have achieved great success in various applications. However, training an effective DNN model for a specific task is rather challenging because it requires prior knowledge or experience to design the network architecture, a repeated trial-and-error process to tune the parameters, and a large set of labeled data to train the model. In this paper, we propose to overcome these challenges by actively adapting a pre-trained model to a new task with fewer labeled examples. Specifica…

Cited by 53 publications (24 citation statements)
References 37 publications
“…Once available, the usage of open source big crystal growth data will resolve the last bottleneck for ANN applications and will strongly push the development of new breakthrough crystalline material-based technologies. Until then, the volume of required training data may be reduced by using advanced machine learning methods known as active learning [49][50][51][52][53].…”
Section: Discussion
confidence: 99%
“…Deep Transfer Learning: Most existing deep transfer learning methods transfer knowledge across domains while assuming the target and source models have equivalent modeling and/or data representation capacities. For example, deep domain adaptation methods have focused mainly on learning domain-invariant representations between very specific domains (e.g., image data) [4,6,9,11,16,22,32,42,46]. Furthermore, this can only be achieved by training both models jointly on source and target domain data.…”
Section: Related Work
confidence: 99%
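The "domain-invariant representation" idea in the excerpt above is often made concrete by penalizing a statistical discrepancy between source and target features during joint training. A minimal sketch of one such discrepancy, a linear-kernel maximum mean discrepancy (MMD), is shown below; the function name and synthetic data are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared distance between the source and target feature means:
    a simple (linear-kernel) MMD estimate of domain discrepancy.
    Adding this as a loss term encourages domain-invariant features."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

# Synthetic features: one target domain close to the source, one shifted.
rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(100, 8))
tgt_near = rng.normal(loc=0.0, size=(100, 8))
tgt_far = rng.normal(loc=2.0, size=(100, 8))

# The shifted domain shows a much larger discrepancy.
print(linear_mmd(src, tgt_near), linear_mmd(src, tgt_far))
```

In practice richer kernels (e.g., Gaussian RBF) are used, but the design choice is the same: the discrepancy is computed on mini-batches of both domains, which is why such methods require training jointly on source and target data.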
“…Nevertheless, all of the above methods were proposed for shallow learning and cannot be directly applied to deep learning models. In the field of deep learning, Huang et al. [31] proposed an active learning method that estimates the usefulness of samples based on two criteria, called distinctiveness and uncertainty. The distinctiveness is obtained by combining the feature information from early to later layers, and the uncertainty of a sample is measured by the maximum entropy.…”
Section: Related Work
confidence: 99%
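The uncertainty criterion mentioned in the excerpt above is commonly realized as entropy-based uncertainty sampling: query the labels of the samples whose predicted class distribution has the highest entropy. A minimal sketch under that assumption (function names are illustrative, not from [31]):

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each row of class-probability vectors (natural log).
    Near-uniform rows (the model is unsure) get the highest entropy."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_most_uncertain(probs, k):
    """Indices of the k samples with the highest predictive entropy,
    i.e., the most informative candidates to label next."""
    entropy = predictive_entropy(probs)
    return np.argsort(entropy)[::-1][:k]

# Three unlabeled samples with softmax outputs over three classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction
    [0.34, 0.33, 0.33],  # near-uniform: most uncertain
    [0.70, 0.20, 0.10],  # moderately uncertain
])
print(select_most_uncertain(probs, 2))  # queries sample 1 first, then 2
```

The distinctiveness criterion described in the excerpt would additionally weight these candidates by feature information aggregated across network layers; the sketch above covers only the uncertainty half.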