2019 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme.2019.00236
Towards Better Uncertainty Sampling: Active Learning with Multiple Views for Deep Convolutional Neural Network

Cited by 25 publications (21 citation statements) · References 12 publications
“…The details of each branch structure in our double-branch network are: (a) Branch-1: convolution-relu-maxpooling-dropout-convolution-relu-maxpooling-dropout-convolution-dense-dropout-dense-softmax; (b) Branch-2: convolution-relu-convolution-relu-maxpooling-dropout-dense-relu-dropout-dense-softmax, with 32 convolution kernels, a 4 × 4 kernel size, 2 × 2 pooling, a dense layer with 128 units, and dropout probabilities of 0.25 and 0.5. For the Cifar-10, SVHN, Scene-15 and UIUC-Sports datasets, we replaced the LeNet architecture with the model in [21].…”
Section: Methods (citation type: mentioning)
confidence: 99%
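A minimal sketch of the two branch structures quoted above, written with tensorflow.keras. The layer orders, 32 convolution kernels of size 4 × 4, 2 × 2 pooling, the 128-unit dense layer and the 0.25/0.5 dropout rates come from the citation statement; the input shape, class count, Flatten layers, and all function names are illustrative assumptions, not the cited authors' code.

```python
# Sketch of the double-branch structures described in the quote above.
# Assumptions: 32x32x3 inputs (CIFAR-10-style) and 10 classes; Flatten
# layers are added where a dense layer follows a convolutional stage.
from tensorflow.keras import layers, models

NUM_CLASSES = 10           # assumption
INPUT_SHAPE = (32, 32, 3)  # assumption

def branch_1():
    # convolution-relu-maxpooling-dropout-convolution-relu-maxpooling-
    # dropout-convolution-dense-dropout-dense-softmax
    return models.Sequential([
        layers.Conv2D(32, (4, 4), activation="relu", input_shape=INPUT_SHAPE),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(32, (4, 4), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(32, (4, 4)),
        layers.Flatten(),
        layers.Dense(128),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def branch_2():
    # convolution-relu-convolution-relu-maxpooling-dropout-dense-relu-
    # dropout-dense-softmax
    return models.Sequential([
        layers.Conv2D(32, (4, 4), activation="relu", input_shape=INPUT_SHAPE),
        layers.Conv2D(32, (4, 4), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
```

The two branches differ only in where pooling and dropout are placed, which is what gives the double-branch network its two "views" of the same input.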
“…The above two baselines utilize the double-branch BCNN as their backbone networks, which is the same as our proposed MALDB. Besides, we also compare the performance of our approach with other existing methods including: max-entropy selection strategy based on Bayesian CNN (BCNN-EN for short) [25], active learning with multiple views (AL-MV for short) [21] and standard CNN with random sample selection (CNN for short) [3].…”
Section: Methods (citation type: mentioning)
confidence: 99%
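The max-entropy baseline (BCNN-EN) mentioned in the statement above queries the unlabelled samples whose predictive entropy is highest. The following is a minimal sketch of that selection step, assuming the Bayesian CNN is approximated with Monte Carlo dropout; the function names, the MC-sample count and the pool interface are hypothetical and not the implementation of [25].

```python
# Max-entropy uncertainty sampling sketch (BCNN-EN-style baseline).
# predict_fn is assumed to return class probabilities with dropout kept
# active at inference time (MC dropout), shape (N, C) for N pool samples.
import numpy as np

def max_entropy_query(predict_fn, x_pool, n_query=100, mc_samples=20):
    # Average the predictive distribution over stochastic forward passes.
    probs = np.mean([predict_fn(x_pool) for _ in range(mc_samples)], axis=0)
    # Predictive entropy H[y|x] = -sum_c p_c log p_c; higher = more uncertain.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Return indices of the n_query most uncertain pool samples to label next.
    return np.argsort(entropy)[-n_query:]
```

In an active-learning loop, the returned indices would be sent to an oracle for labelling, added to the training set, and the network retrained before the next query round.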