Interpretable Relative Squeezing bottleneck design for compact convolutional neural networks model
2019
DOI: 10.1016/j.imavis.2019.06.006

Cited by 13 publications (6 citation statements)
References 3 publications

“…The specific network structure of MobileFaceNet is shown in Table 1, where t denotes the "expansion" multiplier, i.e., the channel expansion factor of the inverted residual block, c denotes the number of output channels, n denotes the number of repetitions, and s denotes the stride. MobileFaceNet retains the bottlenecks of MobileNetV2 [24] and uses them as the main module for building the network; the structure of the bottleneck layer in MobileFaceNet is shown in Figure 1. The expansion factor in the bottlenecks is also reduced, PReLU is chosen as the activation function, and the InsightFace loss is used as the loss function.…”
Section: MobileFaceNet Network Structure
confidence: 99%
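The block described above follows the MobileNetV2 inverted residual pattern (1 × 1 expansion, 3 × 3 depthwise convolution, 1 × 1 linear projection) with PReLU in place of ReLU6. The following is a minimal PyTorch-style sketch of such a bottleneck using the t/c/s notation from the quoted Table 1; it is an illustration under those assumptions, not code from the MobileFaceNet paper.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """MobileNetV2-style inverted residual block with PReLU (sketch).

    t: expansion multiplier, c: output channels, s: stride (Table 1 notation).
    """
    def __init__(self, in_channels, c, t, s):
        super().__init__()
        hidden = in_channels * t
        # residual connection only when spatial size and channel count are preserved
        self.use_residual = (s == 1 and in_channels == c)
        self.block = nn.Sequential(
            # 1x1 expansion: in_channels -> t * in_channels
            nn.Conv2d(in_channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.PReLU(hidden),
            # 3x3 depthwise convolution with stride s
            nn.Conv2d(hidden, hidden, 3, stride=s, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.PReLU(hidden),
            # 1x1 linear projection back to c channels (no activation)
            nn.Conv2d(hidden, c, 1, bias=False),
            nn.BatchNorm2d(c),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out
```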
“…To limit the computational cost of ResNet architectures, the bottleneck block was adopted as the residual block to reduce the number of parameters that have little influence on the result (Zhao et al., 2019). Since the output of the layers close to the input carries shallow semantics, a larger kernel size and stride were set for the convolution or pooling layers to enlarge the receptive field, after which the global features were extracted.…”
Section: Deep Learning Models
confidence: 99%
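As a concrete illustration of both points, the sketch below pairs a generic ResNet-style bottleneck residual block (1 × 1 reduce, 3 × 3, 1 × 1 expand) with a typical large-kernel, large-stride stem that enlarges the receptive field early in the network. It is a PyTorch sketch under common ResNet conventions, not the cited paper's exact model.

```python
import torch.nn as nn

class ResNetBottleneck(nn.Module):
    """Standard ResNet bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand.

    The 1x1 convolutions shrink and restore the channel count so the
    expensive 3x3 convolution operates on fewer channels, cutting parameters.
    """
    expansion = 4

    def __init__(self, in_channels, width, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, width, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, width * self.expansion, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(width * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample  # matches shape when stride or channels change

    def forward(self, x):
        identity = self.downsample(x) if self.downsample is not None else x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)

# Shallow layers use a large kernel and stride to grow the receptive field quickly,
# e.g. the usual ResNet stem: 7x7 conv with stride 2 followed by 3x3 max pooling.
stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)
```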
“…The learned group convolutions find the unimportant input connections in DenseNet and then prune them. Zhao et al. [138] proposed a rule for customizing the number of feature maps, introduced a compression layer composed of 1 × 1 convolutions, and combined it with learned group convolutions to reduce the parameter count. Moreover, Zhu et al. [139] proposed SparseNet, in which only log(l) previous output feature maps are concatenated as the input of layer l at any given position in the network.…”
Section: B. Shortcut Connections
confidence: 99%
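For intuition, the following Python sketch illustrates the two ideas in this passage: a log-sparse connection rule in which layer l concatenates only O(log l) earlier outputs (the exponential-offset pattern used here is an assumption about SparseNet's rule), and a 1 × 1 compression layer that shrinks the concatenated feature maps. The names sparse_inputs and CompressionLayer are hypothetical.

```python
import torch
import torch.nn as nn

def sparse_inputs(l):
    """Indices of earlier layers feeding layer l under a log-sparse rule.

    Offsets of 1, 2, 4, 8, ... (assumed exponential-offset pattern), so layer l
    sees O(log l) inputs instead of the l inputs of a fully dense DenseNet link.
    """
    idx, offset = [], 1
    while l - offset >= 0:
        idx.append(l - offset)
        offset *= 2
    return idx

class CompressionLayer(nn.Module):
    """1x1 convolution reducing concatenated feature maps to out_channels."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, features):
        # features: list of tensors with matching spatial size
        x = torch.cat(features, dim=1)
        return self.relu(self.bn(self.conv(x)))

# Example: under this rule, layer 9 concatenates the outputs of layers 8, 7, 5, and 1.
print(sparse_inputs(9))  # [8, 7, 5, 1]
```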