2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00291

CondenseNet: An Efficient DenseNet Using Learned Group Convolutions

Abstract: Deep neural networks are increasingly used on mobile devices, where computational resources are limited. In this paper we develop CondenseNet, a novel network architecture with unprecedented efficiency. It combines dense connectivity with a novel module called learned group convolution. The dense connectivity facilitates feature re-use in the network, whereas learned group convolutions remove connections between layers for which this feature re-use is superfluous. At test time, our model can be implemented usi…
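The abstract describes learned group convolutions: connections between layers are pruned during training so that, at test time, each group of output filters reads only a learned subset of input channels. The following is a minimal NumPy sketch of that condensation step for a 1x1 convolution, not the paper's actual training procedure; the function name, the magnitude-based importance score, and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def condense_1x1(weight, num_groups, keep_ratio):
    """Sketch of CondenseNet-style connection pruning for a 1x1 conv.

    weight: (out_channels, in_channels) weight matrix.
    Output filters are partitioned into `num_groups` groups; within each
    group, only the `keep_ratio` fraction of input channels with the
    largest aggregate |weight| are kept and the rest are zeroed, so each
    group ends up reading a learned subset of input channels.
    """
    out_ch, in_ch = weight.shape
    group_size = out_ch // num_groups
    keep = max(1, int(round(in_ch * keep_ratio)))
    pruned = np.zeros_like(weight)
    for g in range(num_groups):
        rows = slice(g * group_size, (g + 1) * group_size)
        # per-input-channel importance, summed over the filters in this group
        importance = np.abs(weight[rows]).sum(axis=0)
        kept = np.argsort(importance)[-keep:]  # channels this group keeps
        pruned[rows][:, kept] = weight[rows][:, kept]
    return pruned
```

Because each group retains a different channel subset, the pruned layer can be executed at test time as a standard group convolution after an index-select, which is where the efficiency gain comes from.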

Cited by 717 publications (576 citation statements).
References 35 publications.
“…They demonstrated that such sparsity reduces computational cost and the number of parameters without compromising accuracy. Huang et al. [27] proposed a similar concept, but their approach differs in that the filter groups do not operate on mutually exclusive sets of features. Here we adapt the concept of filter groups to the multi-task learning paradigm and propose an extension. [Figure 4: Illustration of feature routing: (i) uniform splits, (ii) increasing task specialisation, (iii) asymmetrical, (iv) other.]…”
Section: Stochastic Filter Groups
confidence: 99%
“…ShuffleNets [11], [12] utilize low-cost group convolution and channel shuffle. CondenseNet [13] learns the group convolution structure across layers. On the other hand, faster object detection has been achieved in YOLO [14] by introducing a single-stage detection pipeline, in which region proposal and classification are performed simultaneously by a single network.…”
Section: Introduction
confidence: 99%
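The citation above mentions ShuffleNet's channel shuffle, which lets information flow between filter groups by interleaving channels across groups. A minimal NumPy sketch of the operation (the reshape-transpose-reshape trick; this is an illustration, not the library op):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups, as in ShuffleNet.

    x: array of shape (batch, channels, height, width).
    Reshapes channels to (groups, channels // groups), swaps the two
    axes, and flattens back, so consecutive output channels come from
    different groups.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channels must be divisible by groups"
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))
```

With 6 channels and 2 groups, channel order [0, 1, 2, 3, 4, 5] becomes [0, 3, 1, 4, 2, 5], so a subsequent group convolution sees inputs from both original groups.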
“…The accuracy of those models grows with their complexity, leading to redundant latent representations. Several approaches have been proposed in the literature to reduce this redundancy [8,9,10,11,12], and therefore to improve their efficiency.…”
Section: Introduction
confidence: 99%