2022
DOI: 10.1360/sst-2021-0088
Dynamic sparsity and model feature learning enhanced training for convolutional neural network-pruning

Cited by 6 publications (5 citation statements)
References 31 publications
“…As the last layer of the one-dimensional convolutional neural network [17], the SoftMax layer is mainly used to output the probability of the trade volume of different Sino-Korean cultural products and to predict the trade situation of Sino-Korean cultural products, as shown in Equation (10)…”
Section: Construction of One-Dimensional Convolutional Neural Network
Citation type: mentioning, confidence: 99%
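Equation (10) is not reproduced in the snippet; for reference, a SoftMax output layer of this kind conventionally maps the K class scores z_1, ..., z_K to probabilities via the standard formula (generic notation, not necessarily the cited paper's):

p_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, \dots, K.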
“…(2) Due to the influence of various factors and the interactions between them, changes in trade in cultural products are fairly random, resulting in significant differences across the historical data of trade in cultural products. Therefore, the historical data of trade in cultural products are standardized according to Equation (17) to keep the values within the range [0,1]. (3) Some data are selected from the preprocessed historical data of Sino-Korean cultural product trade to form a training sample, and the remaining historical data of the Sino-Korean cultural product trade amount form a test sample.…”
Section: Prediction Process
Citation type: mentioning, confidence: 99%
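Equation (17) itself is not shown in the snippet. A standardization that confines values to [0,1] is typically min-max normalization; assuming that is the intended form:

\tilde{x} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \in [0, 1].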
“…The calculation and data flow in the FPGA can be uniformly allocated through various instruction segments. The length L of each segment can be macro-defined according to (13). N_i represents the number of types included in the i-th segment.…”
Section: Folding Design
Citation type: mentioning, confidence: 99%
“…Additionally, quantization [13] can be used to reduce model size and hardware resource consumption, for example by replacing the original 32-bit floating-point operations with lower-precision fixed-point numbers such as 8-bit or 16-bit. Jacob et al. [15] proposed an integer quantization method, which uniformly quantizes both weights and activations to 8-bit.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
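Jacob et al.'s scheme represents each real value as x ≈ scale · (q − zero_point), where q is an 8-bit unsigned integer. A minimal NumPy sketch of such uniform affine quantization follows; the function names are illustrative and not taken from the cited work:

import numpy as np

def quantize_uint8(x):
    # Affine scheme x ≈ scale * (q - zero_point); the range is widened to
    # include 0.0 so that zero is exactly representable.
    x_min, x_max = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0 if x_max > x_min else 1.0
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximate float tensor from the 8-bit representation.
    return scale * (q.astype(np.float32) - zero_point)

Dequantizing quantize_uint8's output recovers each value to within scale/2 of the original, which is what makes 8-bit inference viable.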
“…Reducing the network structure to make the model lightweight is the main means of improving detection speed, and it can be roughly divided into two technical routes according to the implementation method: lightweight network design [19] and network pruning [20]. Starting from lightweight network design and the practical application scenario of fruit and vegetable recognition, this chapter proposes a classification model based on a depth-wise separable neural network and compares its capability with several classical lightweight network models.…”
Section: Fruit and Vegetable Classification Model Based on Depth-Wise...
Citation type: mentioning, confidence: 99%
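The cited chapter's model is not reproduced in the snippet; below is a minimal PyTorch sketch of the depth-wise separable building block that such designs rest on (the class name and layer choices are illustrative). A depthwise 3×3 convolution filters each channel independently and a pointwise 1×1 convolution then mixes channels, so for C input and output channels it needs roughly 9C + C² weights versus 9C² for a standard 3×3 convolution.

import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise 3x3 conv (groups=in_ch gives one filter per channel),
    # followed by a pointwise 1x1 conv that mixes channel information.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))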