2023
DOI: 10.1016/j.isatra.2022.10.031

Adaptive multiscale and dual subnet convolutional auto-encoder for intermittent fault detection of analog circuits in noise environment

Cited by 12 publications (5 citation statements)
References: 29 publications
“…The data were randomly selected for testing. After the training set is partitioned, the model enters the pre-training phase, and the main ECWGEO parameters are set as follows: the number of filters in the first convolutional layer, the number of filters in the second convolutional layer (64-256), the convolutional kernel size (2-8), the learning rate (0.0001-0.1), and the dropout rate (0-0.5); the population size is set to 30 and the maximum number of iterations to 100. Twenty epochs are used for model training, the batch size is 32, and the fitness function is set as shown in Equation (27).…”
Section: Methods
confidence: 99%
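As a rough, non-authoritative illustration of the quoted search-space setup, the Python sketch below samples candidates from the stated ranges and scores them with a dummy fitness function. It deliberately replaces the ECWGEO optimizer with plain random sampling; the filter range for the first convolutional layer and the fitness of Equation (27) are not recoverable from the extract, so the values used for them are assumed placeholders.

```python
import random

# Search ranges quoted in the citation statement; the range for the first
# convolutional layer is not recoverable from the extract, so (32, 128) is
# an assumed placeholder.
SEARCH_SPACE = {
    "filters_conv1": (32, 128),      # assumed placeholder range
    "filters_conv2": (64, 256),
    "kernel_size":   (2, 8),
    "learning_rate": (1e-4, 1e-1),
    "dropout_rate":  (0.0, 0.5),
}
POPULATION_SIZE = 30
MAX_ITERATIONS = 100
EPOCHS = 20
BATCH_SIZE = 32


def sample_candidate(rng: random.Random) -> dict:
    """Draw one hyperparameter candidate uniformly from the ranges above."""
    return {
        "filters_conv1": rng.randint(*SEARCH_SPACE["filters_conv1"]),
        "filters_conv2": rng.randint(*SEARCH_SPACE["filters_conv2"]),
        "kernel_size":   rng.randint(*SEARCH_SPACE["kernel_size"]),
        "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
        "dropout_rate":  rng.uniform(*SEARCH_SPACE["dropout_rate"]),
    }


def fitness(candidate: dict) -> float:
    """Stand-in for the fitness of Equation (27): in practice this would train
    the network for EPOCHS epochs with BATCH_SIZE and return, e.g., the
    validation loss. Here it only returns a dummy score."""
    return random.random()


if __name__ == "__main__":
    rng = random.Random(0)
    population = [sample_candidate(rng) for _ in range(POPULATION_SIZE)]
    best = min(population, key=fitness)   # lower score treated as better, by assumption
    print("best candidate:", best)
```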
“…is the normal vector of the hyperplane in which the cruise vector is located, and it can be computed using Equation (5).…”
Section: Exploitation and Exploration
confidence: 99%
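For context, this fragment refers to the cruise step of an eagle-inspired optimizer, where the cruise vector lies in the hyperplane whose normal is the attack vector. Equation (5) itself is not quoted, so the NumPy sketch below only shows a generic construction of such a vector: projecting a random direction onto the hyperplane orthogonal to a given normal. The example attack vector is made up.

```python
import numpy as np


def cruise_vector(attack: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a random vector lying in the hyperplane whose normal is `attack`.

    This is a generic projection-based construction, not the exact
    Equation (5) of the cited paper."""
    d = rng.standard_normal(attack.shape)
    # Remove the component along the normal so the result lies in the hyperplane.
    d -= (d @ attack) / (attack @ attack) * attack
    return d


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = np.array([1.0, 2.0, -1.0])        # example attack (normal) vector, assumed
    c = cruise_vector(a, rng)
    print("dot product (should be ~0):", float(a @ c))
```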
“…Unsupervised learning means that, in the training stage, the model's training dataset contains only input features, without corresponding output labels. The Auto-Encoder (AE) [40,41] is an unsupervised neural network that extracts features in order to reconstruct the data. Its basic structure consists of an encoder and a decoder.…”
Section: Data Cleaning
confidence: 99%
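To make the encoder/decoder idea concrete, here is a minimal PyTorch auto-encoder sketch trained purely to reconstruct its input, i.e. with no output labels. The layer sizes, code dimension, and synthetic data are illustrative assumptions and are not taken from the cited papers [40,41].

```python
import torch
from torch import nn


class AutoEncoder(nn.Module):
    """Minimal fully connected auto-encoder: the encoder compresses the input
    into a low-dimensional code and the decoder reconstructs the input from it.
    Layer sizes are illustrative only."""

    def __init__(self, n_features: int = 64, code_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = AutoEncoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.randn(128, 64)              # unlabeled synthetic training batch
    for _ in range(5):                    # a few reconstruction steps
        recon = model(x)
        loss = loss_fn(recon, x)          # the target is the input itself
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print("reconstruction loss:", float(loss))
```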
“…Fang et al. proposed an adaptive multiscale and dual subnet convolutional auto-encoder (AMDSCAE) to detect intermittent faults in analog circuits in noisy environments. Although the proposed AMDSCAE achieves high accuracy in detecting intermittent faults in analog circuits, it is slightly more expensive than other networks in terms of computational cost [20]. Gao et al. proposed an automatic fault detection method for seismic images based on a novel multiscale attention convolutional neural network (MACNN), which can effectively extract fault features and automate fault detection by introducing a multiscale attention mechanism [21].…”
Section: Introduction
confidence: 99%
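As a loose illustration of the multiscale-attention idea mentioned above (and not the actual MACNN or AMDSCAE architectures), the PyTorch sketch below runs parallel 1-D convolutions with different kernel sizes and fuses their outputs with learned, softmax-normalized attention weights; all layer sizes and kernel choices are assumptions.

```python
import torch
from torch import nn


class MultiScaleAttentionBlock(nn.Module):
    """Illustrative multiscale block: parallel 1-D convolutions with different
    kernel sizes, combined by learned per-scale attention weights. A generic
    sketch of the idea, not the cited architectures."""

    def __init__(self, channels: int = 16, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        # One attention logit per scale, derived from globally pooled features.
        self.attn = nn.Linear(channels, len(kernel_sizes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]         # per-scale features
        pooled = x.mean(dim=-1)                                  # (batch, channels)
        weights = torch.softmax(self.attn(pooled), dim=-1)       # (batch, n_scales)
        stacked = torch.stack(feats, dim=1)                      # (batch, n_scales, C, L)
        return (weights[:, :, None, None] * stacked).sum(dim=1)


if __name__ == "__main__":
    block = MultiScaleAttentionBlock()
    signal = torch.randn(4, 16, 128)    # batch of 1-D signals, assumed shape
    print(block(signal).shape)          # torch.Size([4, 16, 128])
```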