2016
DOI: 10.1007/978-3-319-46675-0_65
Recurrent Neural Networks for Adaptive Feature Acquisition

Abstract: We propose to tackle the cost-sensitive learning problem, where each feature is associated with a particular acquisition cost. We propose a new model with the following key properties: (i) it acquires features in an adaptive way; (ii) features can be acquired per block (several at a time), so the model can deal with high-dimensional data; and (iii) it relies on representation-learning ideas. The effectiveness of this approach is demonstrated in several experiments considering a variety of datasets and with …

Cited by 17 publications (16 citation statements)
References 10 publications
“…Based on the well-trained random forest classifier, it prunes the RF by linear programming to balance cost and accuracy [21]. RADIN: an adaptive learning method that uses recurrent neural networks (RNNs) with attention to select a fixed block of features, with a fixed number of selection steps [22]. DWSM: a method that formalizes the task as a Markov decision process (MDP) and solves it with linearly approximated Q-learning [13].…”
Section: Comparison Methods
confidence: 99%
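The block-wise, adaptive acquisition attributed to RADIN above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the greedy argmax selection rule, and all dimensions are our own assumptions.

```python
import numpy as np

# Hypothetical sketch in the spirit of the RADIN description: a recurrent
# state attends over feature blocks, greedily acquires the highest-scoring
# unacquired block, and updates itself with the newly observed values.
def acquire_features(x, n_blocks, block_size, n_steps, W_h, W_x, W_att):
    hidden = np.zeros(W_h.shape[0])            # recurrent state
    acquired = np.zeros(n_blocks, dtype=bool)  # blocks paid for so far
    observed = np.zeros_like(x)                # features revealed so far
    for _ in range(n_steps):                   # fixed number of steps
        scores = W_att @ hidden                # attention score per block
        scores[acquired] = -np.inf             # never re-buy a block
        b = int(np.argmax(scores))             # greedy block choice
        acquired[b] = True
        observed[b * block_size:(b + 1) * block_size] = \
            x[b * block_size:(b + 1) * block_size]
        hidden = np.tanh(W_h @ hidden + W_x @ observed)  # recurrent update
    return observed, acquired

rng = np.random.default_rng(0)
x = rng.normal(size=12)                        # 4 blocks of 3 features
W_h = rng.normal(size=(5, 5))
W_x = rng.normal(size=(5, 12))
W_att = rng.normal(size=(4, 5))
observed, acquired = acquire_features(x, 4, 3, 2, W_h, W_x, W_att)
```

A classifier would then be applied to `observed`; the point of the sketch is only that acquisition cost is controlled by the block size and the fixed number of selection steps.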
“…Model selection is performed by training multiple models, selecting the best models on the validation set, and computing their performance on the test set. Note that the 'best models' in terms of both accuracy and FLOPs are the models located on the Pareto front of the accuracy/cost validation curve, as done for instance in [4]. These models are also evaluated using the matched, correct, wrong and false alarm (FA) metrics proposed in [16], computed over the one-hour stream provided with the original dataset.…”
Section: Methods
confidence: 99%
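Selecting the 'best models' on the accuracy/cost Pareto front, as the excerpt describes, amounts to keeping every model that no other model dominates (at least as accurate and at least as cheap, strictly better on one axis). A minimal sketch, with hypothetical candidate values:

```python
def pareto_front(models):
    """Keep (cost, accuracy) points not dominated by any other point.

    A model is dominated if some other model is at least as cheap and
    at least as accurate, and strictly better on at least one axis.
    """
    front = []
    for i, (c_i, a_i) in enumerate(models):
        dominated = any(
            c_j <= c_i and a_j >= a_i and (c_j < c_i or a_j > a_i)
            for j, (c_j, a_j) in enumerate(models) if j != i
        )
        if not dominated:
            front.append((c_i, a_i))
    return front

# Hypothetical (FLOPs-cost, validation-accuracy) pairs, not real results.
candidates = [(10, 0.90), (20, 0.95), (15, 0.85), (30, 0.95)]
front = pareto_front(candidates)  # → [(10, 0.90), (20, 0.95)]
```

Here (15, 0.85) is dominated by (10, 0.90), and (30, 0.95) by the equally accurate but cheaper (20, 0.95), so only two models survive onto the front.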
“…The highlighted architecture is the base model to which we have added shortcut connections. Conv1 and Conv2 have filter sizes of (20, 8) and (10, 4), respectively. Both have 64 channels, and Conv1 has a stride of 3 in the frequency domain.…”
Section: Problem Definition
confidence: 99%
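The filter sizes and stride quoted above fix the layer output shapes via the standard 'valid' convolution formula. A small sketch; the 98-frame by 40-bin input spectrogram is our assumption for illustration, not stated in the excerpt:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output length along one axis of a convolution: (n + 2p - k) // s + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Conv1 has a (20, 8) filter with stride 3 on the frequency axis.
time_out = conv_out(98, 20)           # time axis, stride 1
freq_out = conv_out(40, 8, stride=3)  # frequency axis, stride 3
```

Applying Conv2's (10, 4) filter to the resulting map shrinks each axis again by the same formula.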
“…Figure 2: Evaluation of the proposed method on the MNIST dataset. Accuracy vs. number of acquired features for OL, RADIN (Contardo et al, 2016), GreedyMiser, and a recent work based on reinforcement learning (RL-Based) (Janisch et al, 2017), as well as the Cronus (Chen et al, 2012) and Early Exit (Cambazoglu et al, 2010) approaches.…”
Section: Datasets and Experiments
confidence: 99%