2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00357
CondenseNet V2: Sparse Feature Reactivation for Deep Networks

Cited by 54 publications (23 citation statements). References 21 publications.
“…Multiple studies about lightweight and efficient CNN architectures have been published over the last years [31,23,45,21,50]. These studies introduce different mechanisms.…”
Section: Lightweight and Efficient Architectures (mentioning, confidence: 99%)
“…Next, MobileNetV3 [23] combines automated network search techniques and optimized nonlinearities on an architecture based on inverted residual blocks. Afterward, GhostNet [21] introduces a novel Ghost module based on linear transformations, and CondenseNetV2 [50] relies on a new Sparse Feature Reactivation module which reuses a set of the most important features from preceding layers.…”
Section: Lightweight and Efficient Architectures (mentioning, confidence: 99%)
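
The Ghost-module idea quoted above (deriving extra feature maps from cheap linear transformations applied to a few expensively computed ones) can be illustrated with a minimal PyTorch sketch. The class name, channel ratio, and kernel sizes below are illustrative assumptions, not the exact GhostNet [21] or CondenseNetV2 [50] implementation:

import torch
import torch.nn as nn

class GhostStyleModule(nn.Module):
    """Minimal sketch of a Ghost-style block: a costly pointwise convolution
    produces a few "intrinsic" feature maps, a cheap depthwise convolution
    (the linear transformation) derives extra "ghost" maps from them, and
    the two sets are concatenated. Assumes out_channels is even so the
    depthwise group count divides the ghost channel count."""

    def __init__(self, in_channels: int, out_channels: int, ratio: int = 2):
        super().__init__()
        init_ch = out_channels // ratio        # intrinsic maps (expensive conv)
        ghost_ch = out_channels - init_ch      # ghost maps (cheap depthwise op)
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        # Depthwise 3x3 conv: one cheap linear transform per intrinsic map.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, kernel_size=3, padding=1,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)

# Usage: 16 input channels -> 32 output channels, at roughly half the
# multiply-adds of a full 32-channel convolution.
x = torch.randn(1, 16, 56, 56)
y = GhostStyleModule(16, 32)(x)
assert y.shape == (1, 32, 56, 56)

CondenseNetV2's Sparse Feature Reactivation follows a related efficiency motive but operates across layers, learning a sparse set of connections that refresh the most important features from preceding layers rather than generating new maps within one block.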
“…The visual grounding (VG) task [13,24,40,65] has achieved great progress in recent years, with advances in both computer vision [16,20,21,25,26,46,56,57,59] and natural language processing [4,14,41,50,53]. It aims to localize the objects referred to by natural language queries, which is essential for various vision-language tasks, e.g., visual question answering [2] and visual commonsense reasoning [67].…”
Section: Introduction (mentioning, confidence: 99%)