Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3411973

EPNet: Learning to Exit with Flexible Multi-Branch Network

Abstract: Dynamic inference is an emerging technique that reduces the computational cost of deep neural networks in resource-constrained scenarios, such as inference on mobile devices. One way to achieve dynamic inference is to leverage multi-branch neural networks that apply different computation to input data by following different branches. Conventional research on multi-branch neural networks has mainly targeted improving the accuracy of each branch, using manually designed rules to decide which input follows which branch…
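As a hedged illustration of the multi-branch design the abstract describes (a generic sketch, not the authors' EPNet architecture; all layer names and sizes here are invented for the example), each branch attaches a classifier head to a shared backbone:

```python
# Minimal multi-branch network sketch: a shared stem, a sequence of blocks,
# and one classifier head ("exit") per block. Sizes are illustrative only.
import torch
import torch.nn as nn

class MultiBranchNet(nn.Module):
    def __init__(self, num_classes=10, width=64, num_branches=3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, width, 3, padding=1), nn.ReLU())
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(num_branches)
        )
        self.exits = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(width, num_classes))
            for _ in range(num_branches)
        )

    def forward(self, x):
        # Training-time forward: return logits from every exit so that all
        # branches can be supervised jointly.
        x = self.stem(x)
        outputs = []
        for block, head in zip(self.blocks, self.exits):
            x = block(x)
            outputs.append(head(x))
        return outputs
```

At inference time one traverses the blocks sequentially and stops at the first exit that satisfies the chosen policy; the citation statements below contrast two families of such policies, with sketches after each.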

Cited by 32 publications (8 citation statements); references 15 publications.

Citation statements (ordered by relevance):
“…3 (b)), where early features can be propagated to deep layers if needed. Based on such an architecture design, early exiting can be achieved according to confidence-based criteria [43], [48] or learned decision functions [44], [49], [50], [51]. Note that the confidence-based exiting policy consumes no extra computation during inference, while usually requiring tuning the threshold(s) on the validation set.…”
Section: Dynamic Depth (mentioning; confidence: 99%)
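A hedged sketch of the confidence-based policy this statement describes: no extra learned parameters, just a softmax-confidence threshold that would be tuned on a validation set. `MultiBranchNet` refers to the illustrative class above, and the threshold value is arbitrary.

```python
import torch

@torch.no_grad()
def predict_with_early_exit(model, x, threshold=0.9):
    # Traverse branches in order; return the first prediction whose softmax
    # confidence clears the threshold (assumes batch size 1 for clarity).
    model.eval()
    h = model.stem(x)
    for block, head in zip(model.blocks, model.exits):
        h = block(h)
        probs = torch.softmax(head(h), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:
            return pred.item(), conf.item()
    # No exit was confident enough: fall back to the last exit's prediction.
    return pred.item(), conf.item()
```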
“…This technique generally cannot be applied to binary classification and regression tasks. On the other hand, BERxiT [34] and EPNet [6] use learned modules for early exiting.…”
Section: Early-exiting Network (mentioning; confidence: 99%)
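In the spirit of the learned exiting modules this statement mentions (a sketch only, matching neither BERxiT's nor EPNet's exact design; all sizes are invented), the exit decision can itself be a small trainable head over a branch's features:

```python
import torch
import torch.nn as nn

class LearnedExitGate(nn.Module):
    # A small trainable head that maps a branch's feature map to a halt
    # probability, replacing a hand-tuned confidence threshold.
    def __init__(self, width=64, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):
        return torch.sigmoid(self.net(feats))  # P(exit here | features)
```

Because the gate predicts a halt probability from features rather than from class-confidence scores, this style of module sidesteps the limitation on binary classification and regression that the statement points out.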
“…Expectedly, one may wonder why not learn the network weights and the exit policy jointly. In this direction, there has been work approaching the exit policy in differentiable [6,60] and non-differentiable [8] ways. In essence, instead of explicitly measuring the exit's confidence, the decision on whether to exit can be based on the feature maps of the exits themselves.…”
Section: Deploying the Network (mentioning; confidence: 99%)
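One common way to make such a discrete, feature-based exit decision trainable end to end is a straight-through estimator; this is purely an illustrative sketch, not the mechanism used by the cited works [6, 8, 60], each of which has its own approach:

```python
import torch

def straight_through_halt(halt_prob: torch.Tensor) -> torch.Tensor:
    # Forward pass: hard 0/1 exit decision. Backward pass: gradients flow
    # through the continuous halt probability, so a gate like the one
    # sketched above can be trained jointly with the network weights.
    hard = (halt_prob > 0.5).float()
    return hard + halt_prob - halt_prob.detach()
```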
“…Scardapane et al. [60]: Vision / Classification; differentiable, jointly learned exit policy on EE-networks.
EPNet [8]: Vision / Classification; non-differentiable exit policy for EE-networks.
Chen et al. [6]: Vision / {Classification, Denoising}; jointly learned variational exit policy for EE-networks.…”
Section: Learnable Exit Policies (mentioning; confidence: 99%)