2018
DOI: 10.48550/arxiv.1807.01430
Preprint

SGAD: Soft-Guided Adaptively-Dropped Neural Network

Zhisheng Wang,
Fangxuan Sun,
Jun Lin
et al.

Abstract: Deep neural networks (DNNs) have been proven to contain many redundancies; hence, many efforts have been made to compress DNNs. However, existing model compression methods treat all input samples equally, ignoring the fact that input samples differ in how difficult they are to classify correctly. To address this problem, DNNs with an adaptive dropping mechanism are explored in this work. To inform the DNNs of how difficult an input sample is to classify, a guideline that contains th…

Cited by 1 publication (1 citation statement)
References 11 publications
“…We suppose that the slow path mainly plays the role of capturing features from still images and providing object-instance information to support the final action recognition, but the slow path dominates the overall memory footprint of the network (≈ 98%). In line with the general perspective of many works that focus on model compression of 2D ConvNets [10,26], we believe that there is considerable redundancy in the slow path, so a channel bottleneck structure is introduced, composed of a depth-wise 3 × 3 kernel and a point-wise 1 × 1 one, to replace the 1×3×3 kernel for further reduction of the model size. This variant is denoted as SlowDepth in Fig.…”
Section: Design Space Exploration on SlowFast
confidence: 99%
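The quoted bottleneck swaps one channel-mixing convolution for a depth-wise filter followed by a point-wise 1 × 1 projection, which cuts the parameter count roughly by a factor of the kernel area. A minimal sketch of that parameter arithmetic (the channel width is an illustrative assumption, not a value from the cited paper):

```python
def standard_conv_params(c_in, c_out, k):
    # A k x k convolution mixing all channels: one k x k filter
    # per (input channel, output channel) pair.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depth-wise k x k (one filter per input channel) followed by
    # a point-wise 1 x 1 convolution that mixes the channels.
    return c_in * k * k + c_in * c_out

# Illustrative channel width (an assumption):
c = 256
print(standard_conv_params(c, c, 3))   # 589824
print(separable_conv_params(c, c, 3))  # 67840, roughly 8.7x fewer parameters
```

The saving grows with channel width, since the depth-wise term scales linearly in the channel count while the standard convolution scales quadratically.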