2021
DOI: 10.3389/fams.2020.529564
Structured Sparsity of Convolutional Neural Networks via Nonconvex Sparse Group Regularization

Abstract: Convolutional neural networks (CNNs) have recently been hugely successful, achieving superior accuracy and performance in various imaging applications such as classification, object detection, and segmentation. However, a highly accurate CNN model requires millions of parameters to be trained and utilized. Even a slight increase in performance requires significantly more parameters, obtained by adding more layers and/or increasing the number of filters per layer. Apparently, many of these weight parameters turn o…
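The abstract describes enforcing structured (group) sparsity on CNN weights through regularization. As a rough sketch only: the snippet below adds a convex sparse-group-lasso-style penalty (an element-wise ℓ1 term plus a filter-wise group ℓ2 term) to a PyTorch training loss, with each output filter treated as one group. The penalty weights lambda1/lambda2 and the use of the plain convex penalty are illustrative assumptions; the paper itself studies nonconvex variants of this penalty.

```python
# Illustrative sketch (not the paper's exact method): a sparse-group-lasso-style
# penalty over convolutional filters, one group per output filter.
import torch.nn as nn

def sparse_group_penalty(model, lambda1=1e-5, lambda2=1e-4):
    # l1 term: pushes individual weights toward zero (unstructured sparsity)
    # group term: one l2 norm per output filter, pushes whole filters to zero
    l1 = 0.0
    group = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight                          # shape (out_ch, in_ch, kH, kW)
            l1 = l1 + w.abs().sum()
            group = group + w.flatten(1).norm(dim=1).sum()
    return lambda1 * l1 + lambda2 * group

# Usage inside a training step (model, criterion, optimizer, inputs, targets
# are assumed to exist):
#   loss = criterion(model(inputs), targets) + sparse_group_penalty(model)
#   loss.backward()
#   optimizer.step()
```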

Cited by 11 publications (4 citation statements). References 85 publications.
“…However, in the context of bi‐level selection procedures, alternative combinations have never been systematically investigated, so its full potential has not yet been exploited (Buch et al., 2023). More precisely, the limited approaches were studied independently (Bui et al., 2021; Liu et al., 2013), so their relationship was not formalized, and they were compared neither with each other nor with approaches of the hierarchical framework.…”
Section: Introduction (mentioning)
confidence: 99%
“…Moreover, Wen et al. (2016) show that by using gradient descent methods, less training time is required by a DNN with a group sparse regularizer than by a DNN with a lasso regularizer. The group sparse regularizer also appears in convolutional neural networks (Bui et al., 2021) and other machine learning problems (Meier et al., 2008; Jenatton et al., 2011; Simon et al., 2013), etc. Hence, we focus on training the leaky ReLU network with the ℓ2,1 regularizer to pursue group sparsity.…”
Section: Introduction (mentioning)
confidence: 99%
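For reference, the ℓ2,1 (group lasso) regularizer cited above sums the ℓ2 norms of weight groups, which drives entire groups to zero rather than individual entries. In standard notation (assumed here, not quoted from the cited work), with the weights partitioned into groups $w_1, \dots, w_G$:

$$
\|W\|_{2,1} \;=\; \sum_{g=1}^{G} \|w_g\|_2,
\qquad
\min_{W}\; \mathcal{L}(W) + \lambda \,\|W\|_{2,1},
$$

where $\mathcal{L}$ is the training loss and $\lambda > 0$ controls the strength of the group sparsity.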
“…Both Ma et al. [47] and Pandit et al. [48] proposed a regularizer that combines group sparsity and Tℓ1 and applied it to CNNs for image classification. Bui et al. [49] generalized the sparse group lasso to incorporate nonconvex regularizers and applied it to various CNN architectures. Li et al. [50] introduced sparsity-inducing matrices into CNNs and imposed group sparsity on the rows or columns via ℓ1 or other nonconvex regularizers to prune filters and/or channels.…”
Section: Introduction (mentioning)
confidence: 99%
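The filter/channel pruning mentioned above typically amounts to discarding the groups whose norm the regularizer has driven to (near) zero. A minimal sketch, assuming a trained PyTorch model and a user-chosen threshold; the cited papers' actual pruning criteria and implementations may differ.

```python
# Minimal sketch: zero out output filters with negligible group (l2) norm.
# The threshold value and the zeroing strategy are illustrative assumptions.
import torch
import torch.nn as nn

@torch.no_grad()
def prune_small_filters(model, threshold=1e-3):
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            norms = m.weight.flatten(1).norm(dim=1)   # one norm per output filter
            dead = norms <= threshold
            m.weight[dead] = 0.0                      # zero out near-dead filters
            if m.bias is not None:
                m.bias[dead] = 0.0
            print(f"{type(m).__name__}: pruned {int(dead.sum())}/{dead.numel()} filters")
```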