2020
DOI: 10.1155/2020/2940286
An Electronic Component Recognition Algorithm Based on Deep Learning with a Faster SqueezeNet

Abstract: Electronic component recognition plays an important role in industrial production, electronic manufacturing, and testing. In order to address the low recognition recall and accuracy of traditional image recognition technologies (such as principal component analysis (PCA) and the support vector machine (SVM)), this paper tests multiple deep learning networks and optimizes the SqueezeNet network. The paper then presents an electronic component recognition algorithm based on the Faster SqueezeNet…

Cited by 26 publications (14 citation statements)
References 27 publications
“…Additionally, CNN could be trained end to end for the selection and extraction of image features and, finally, could be utilized for predicting or classifying the image (Muhammad et al 2021). Since VGGNet and AlexNet have increasingly large numbers of parameters, the SqueezeNet network was proposed to keep the number of parameters low while maintaining accuracy (Xu et al 2020).…”
Section: Network Architecturementioning
confidence: 99%
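The parameter saving attributed to SqueezeNet above comes from its "Fire" modules, which squeeze channels with 1×1 convolutions before expanding. A minimal sketch of the arithmetic, using illustrative layer sizes (the fire2 configuration from the original SqueezeNet paper, not necessarily the configuration used in the Faster SqueezeNet of this article):

```python
# Hedged sketch: parameter-count comparison between a standard 3x3 conv layer
# and a SqueezeNet "Fire" module (1x1 squeeze -> parallel 1x1 + 3x3 expand).
# Layer sizes are illustrative assumptions, not this article's exact config.

def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution, ignoring biases."""
    return in_ch * out_ch * k * k

def fire_params(in_ch, squeeze_ch, expand1_ch, expand3_ch):
    """Fire module: 1x1 squeeze conv, then parallel 1x1 and 3x3 expand convs."""
    squeeze = conv_params(in_ch, squeeze_ch, 1)
    expand = (conv_params(squeeze_ch, expand1_ch, 1)
              + conv_params(squeeze_ch, expand3_ch, 3))
    return squeeze + expand

standard = conv_params(96, 128, 3)    # plain 3x3 conv: 96 -> 128 channels
fire = fire_params(96, 16, 64, 64)    # fire2-style sizes, same 128 output channels
print(standard, fire)  # → 110592 11776
```

For the same 96-to-128-channel mapping, the squeeze step cuts the weight count by roughly 9×, which is how SqueezeNet keeps accuracy with far fewer parameters.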
“…$x_0, x_1, \ldots, x_{l-1}$, and $H_l$ concatenates multiple inputs [16]. Without substantially increasing the number of network parameters, the efficiency of the network is improved from the early stages; at the same time, some layers are directly connected to the data.…”
Section: Feature Extraction: Optimal Faster Squeezenet Modelmentioning
confidence: 99%
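The dense connectivity described above, $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$, can be sketched with a toy transform standing in for the real composite function (actual networks use conv–BN–ReLU for $H_l$; the sum-plus-offset here is an illustrative assumption):

```python
# Hedged sketch of dense connectivity: each layer's input is the
# concatenation of all earlier feature maps, x_l = H_l([x_0, ..., x_{l-1}]).
# H is a toy stand-in transform; real networks use conv-BN-ReLU blocks.

def H(features, l):
    """Stand-in for the l-th composite function over the concatenated inputs."""
    return sum(features) + l  # toy: any function of all earlier outputs

def dense_block(x0, num_layers):
    features = [x0]                      # x_0 plus every later layer's output
    for l in range(1, num_layers + 1):
        features.append(H(features, l))  # x_l sees all of x_0 .. x_{l-1}
    return features

print(dense_block(1, 3))  # → [1, 2, 5, 11]
```

Because each layer only needs to add a small number of new feature maps on top of everything it already receives, the parameter count grows slowly while early features remain directly reachable by later layers.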
“…$x_{i,j}^{t+1} = x_{i,j}^{t} + \left(x_{k,j}^{t} - x_{i,j}^{t}\right) \times FL \times \operatorname{rand}(0, 1)$, (16) where $FL$ ($FL \in [0, 2]$) means that the scrounger will follow the producer in searching for food. The BSA approach develops an FF for attaining enhanced classifier efficiency.…”
mentioning
confidence: 99%
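The scrounger update of the bird swarm algorithm (BSA) quoted above moves bird $i$ toward producer $k$ by a randomly scaled fraction of their per-dimension gap. A minimal sketch, with the random source injectable so the behavior is reproducible (function names are illustrative, not from the cited paper):

```python
import random

# Hedged sketch of the BSA scrounger update:
#   x_{i,j}^{t+1} = x_{i,j}^t + (x_{k,j}^t - x_{i,j}^t) * FL * rand(0, 1)
# where bird i follows producer k and FL is drawn from [0, 2].

def scrounger_update(x_i, x_k, fl, rng=random.random):
    """Move scrounger position x_i toward producer position x_k, per dimension."""
    return [xi + (xk - xi) * fl * rng() for xi, xk in zip(x_i, x_k)]

# With rand fixed at 1 and FL = 2, the scrounger overshoots the producer
# to the mirror point on the far side.
print(scrounger_update([0.0, 1.0], [1.0, 3.0], fl=2.0, rng=lambda: 1.0))
# → [2.0, 5.0]
```

Since $FL \in [0, 2]$ and $\operatorname{rand}(0,1) \in (0, 1)$, each step lands somewhere between the scrounger's current position and (at most) the point reflected through the producer.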
“…Among the deep learning algorithms, the bidirectional gated recurrent unit network (BiGRU) is a type of bidirectional recurrent neural network that can fully express the relationship between the current output of a sequence and previous information [17]. However, the feature dimension of the amino acid time series is too high, and directly using the BiGRU to process the amino acid sequence parameters results in low efficiency.…”
Section: Introductionmentioning
confidence: 99%
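The bidirectional processing mentioned above can be sketched structurally: one pass reads the sequence left to right, another right to left, and the two hidden states are paired per time step. The recurrent cell below is a toy accumulator standing in for a real gated GRU cell (an assumption for illustration only):

```python
# Hedged sketch of bidirectional sequence processing as in a BiGRU:
# forward and backward passes over the sequence, outputs paired per step.

def toy_cell(h, x):
    """Stand-in recurrent cell; real BiGRUs use gated (update/reset) updates."""
    return h + x

def run_direction(seq, cell):
    h, states = 0, []
    for x in seq:
        h = cell(h, x)
        states.append(h)
    return states

def bidirectional(seq, cell=toy_cell):
    fwd = run_direction(seq, cell)
    bwd = run_direction(seq[::-1], cell)[::-1]  # align backward states to positions
    return list(zip(fwd, bwd))                  # combined output per time step

print(bidirectional([1, 2, 3]))  # → [(1, 6), (3, 5), (6, 3)]
```

Each output step thus carries context from both directions, which is what lets a BiGRU relate the current output to past and future sequence elements; the efficiency concern in the quote arises because both passes must run over every high-dimensional feature vector.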