2020
DOI: 10.1007/s41095-020-0185-5

Weight asynchronous update: Improving the diversity of filters in a deep convolutional network

Abstract: Deep convolutional networks have obtained remarkable achievements on various visual tasks due to their strong ability to learn a variety of features. A well-trained deep convolutional network can be compressed to 20%–40% of its original size by removing filters that make little contribution, as many overlapping features are generated by redundant filters. Model compression can reduce the number of unnecessary filters but does not take advantage of redundant filters since the training phase is not affected. Mod…
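The abstract describes compressing a trained network by removing filters that make little contribution. A common contribution criterion is a filter's L1 norm; the criterion and function below are illustrative assumptions for a minimal sketch, not necessarily the method this paper uses:

```python
import numpy as np

def prune_filters_by_l1(weights, keep_ratio=0.6):
    """Rank conv filters by L1 norm and keep the top fraction.

    weights: array of shape (num_filters, in_channels, kH, kW).
    Returns the kept filters and their original indices.
    """
    norms = np.abs(weights).sum(axis=(1, 2, 3))    # one L1 norm per filter
    num_keep = max(1, int(round(keep_ratio * len(norms))))
    keep_idx = np.argsort(norms)[::-1][:num_keep]  # largest-norm filters first
    keep_idx = np.sort(keep_idx)                   # preserve original order
    return weights[keep_idx], keep_idx
```

Keeping 60% of the filters matches the 20%–40% size reduction range the abstract cites.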

Cited by 5 publications (3 citation statements)
References 23 publications
“…Moreover, it is robust to occluded or undetected joints and generalizes well to in-the-wild scenarios. In future work, we will optimize our network [45,46] for better runtime performance and extend the idea of LCMDN to related fields such as visual tracking [47], multi-view and multi-person pose estimation [46,48], and 3D human hand pose estimation [49].…”
Section: Discussion
confidence: 99%
“…The residual operation was introduced to neural networks by He et al. [31] to ease model optimization, and it has since been widely applied to image classification, image dehazing, object detection, etc. [32,33,34,35,36] Compared with the classic residual operation, using a parallel RNN module as the residual component has the following characteristics: 1) Compound residual operation: the parallel RNN module is a generalization of the classic residual operation, adding a linear projection and bias to the inputs. When all internal parameters are initialized to the identity matrix, it degenerates to the classic residual operation.…”
Section: Parallel RNN Module
confidence: 99%
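The compound residual operation quoted above can be sketched in a few lines. This is a hypothetical illustration of the general form y = (Wx + b) + F(x) and its degeneration to the classic residual; the function name and shapes are assumptions, not the cited paper's code:

```python
import numpy as np

def compound_residual(x, W, b, branch):
    """Compound residual: y = (W @ x + b) + branch(x).

    The skip path is a learned linear projection of the input rather than
    the input itself. Initializing W to the identity matrix and b to zero
    recovers the classic residual y = x + branch(x).
    """
    return (W @ x + b) + branch(x)
```

With `W = np.eye(n)` and `b = 0`, the output equals `x + branch(x)`, exactly the classic residual connection.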
“…Zhang et al [4] proposed a method for updating a specific subfilter cascade chosen dynamically during training to produce more diverse convolutional filters and reduce overlap in representation. A more general examination of this problem found solutions like structuring a network to resemble the knowledge base, which could be achieved either manually [5] or through a training process [6].…”
Section: Introduction
confidence: 99%
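Zhang et al.'s idea of updating only a dynamically chosen subset of filters at each training step can be sketched as a masked gradient update. This is a simplified, hypothetical version of the idea: a random subset stands in for the paper's dynamically chosen sub-filter cascade, and the names below are illustrative:

```python
import numpy as np

def async_filter_update(weights, grads, step_size, rng, update_fraction=0.5):
    """Apply a gradient step to only a randomly chosen subset of filters.

    weights, grads: arrays of shape (num_filters, ...). Filters outside the
    chosen subset keep their current values, so different filters are
    updated on different steps, encouraging them to drift toward more
    diverse features.
    """
    num_filters = weights.shape[0]
    k = max(1, int(update_fraction * num_filters))
    chosen = rng.choice(num_filters, size=k, replace=False)
    mask = np.zeros(num_filters, dtype=bool)
    mask[chosen] = True
    new_w = weights.copy()
    new_w[mask] -= step_size * grads[mask]    # update only the chosen subset
    return new_w, mask
```

Over many steps, each filter receives updates on a different schedule, which is the asynchrony the title refers to.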