2019
DOI: 10.1109/tcad.2018.2878169

Predicting $X$-Sensitivity of Circuit-Inputs on Test-Coverage: A Machine-Learning Approach

Cited by 13 publications (9 citation statements)
References 31 publications
“…For each choice of the length of the PRPG, the test cost is predicted, and the length corresponding to the minimum cost is selected (Li et al., 2017; Pradhan et al., 2019).…”
Section: ATPG Test Cost and Testability Issues (mentioning)
confidence: 99%
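The excerpt above describes a simple one-dimensional search: predict the test cost for each candidate PRPG length and keep the minimizer. A minimal Python sketch of that loop follows; the cost predictor and the candidate-length range are hypothetical placeholders, not the models of Li et al. (2017) or Pradhan et al. (2019).

    # Sketch: pick the PRPG length whose predicted test cost is lowest.
    # `predict_test_cost` stands in for any trained cost-prediction model
    # (hypothetical here); the candidate range is purely illustrative.
    def select_prpg_length(candidate_lengths, predict_test_cost):
        """Return the candidate PRPG length with the lowest predicted cost."""
        return min(candidate_lengths, key=predict_test_cost)

    # Toy usage with a stand-in quadratic cost model:
    toy_cost = lambda length: (length - 24) ** 2 + 100
    print(select_prpg_length(range(8, 65), toy_cost))  # -> 24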
“…Moreover, in the post-silicon validation phase, many of the design bugs that are identified manifest themselves as X-values. X-sensitivity, that is, the effect of X on the loss of fault coverage in digital circuits, has been studied in Pradhan et al. (2019), where a fast and effective method for predicting the X-sensitivity of the inputs of a digital circuit is proposed.…”
Section: ATPG Test Cost and Testability Issues (mentioning)
confidence: 99%
“…This study focuses on the ε-SVR model. This method uses the concept of an ε-insensitive loss function, in which the training data deviate from the targets $y_i$ by at most ε [38]–[42]. The loss function computes the distance between the observed value y and the ε boundary, treating as zero those errors that lie within a distance ε of the observed value.…”
Section: A. Support Vector Regression (SVR) (mentioning)
confidence: 99%
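For reference, the ε-insensitive loss sketched in this excerpt has the standard textbook form (a reconstruction, not a quotation from the citing paper):

    L_\varepsilon(y, f(x)) = \max(0, \lvert y - f(x) \rvert - \varepsilon)

so residuals of magnitude at most ε incur zero loss, while larger residuals are penalized linearly beyond the ε boundary.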
“…Now, the best estimated function f(x), that is, the optimal solution of the convex optimization problem, can be derived by solving (5) and (6) using the well-known Sequential Minimal Optimization (SMO) algorithm, expressed by (8). Here, K(·) denotes the kernel function, $x_j$ is a support vector, n is the number of support vectors, the bracketed term represents the weight coefficients of the support vectors [41], [42], and b is the bias.…”
Section: A. Support Vector Regression (SVR) (mentioning)
confidence: 99%
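Equation numbers (5), (6), and (8) belong to the citing paper and are not reproduced here; the prediction function the excerpt describes has the standard SVR dual form (a textbook reconstruction consistent with the description above):

    f(x) = \sum_{j=1}^{n} (\alpha_j - \alpha_j^{*}) \, K(x_j, x) + b

where the bracketed coefficients $(\alpha_j - \alpha_j^{*})$ weight the support vectors $x_j$ and b is the bias term.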