2019
DOI: 10.48550/arxiv.1903.08789
Preprint

Interpreting Neural Networks Using Flip Points

Cited by 4 publications (8 citation statements) · References 0 publications

“…Moving the decision boundaries away from the training data also tends to improve the generalization of deep learning models as reported by Elsayed et al [3] and Yousefzadeh and O'Leary [35].…”
Section: Debugging a Model (mentioning)
confidence: 84%
“…Here, we demonstrate our techniques for explaining, auditing, and debugging deep learning models on three different datasets with societal themes. We use three software packages, NLopt [10], IPOPT [30], and the Optimization Toolbox of MATLAB, as well as our own custom-designed homotopy algorithm [35], to solve the optimization problems. The algorithms almost always converge to the same point.…”
Section: Results (mentioning)
confidence: 99%
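The snippet above describes computing flip points by solving optimization problems with general-purpose solvers. Below is a minimal, hypothetical sketch of that idea in Python/SciPy, not the cited implementation (which used NLopt, IPOPT, MATLAB's Optimization Toolbox, and a custom homotopy algorithm): the flip point of an input is cast as the nearest point where the model's two class outputs tie, i.e. the closest point on the decision boundary. The toy network, its random weights, and the helper names are assumptions for illustration only.

```python
# Hypothetical sketch: a flip point as the nearest point where two logits tie.
# toy_net, find_flip_point, and the random weights are illustrative assumptions,
# not the cited work's models or solvers.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)   # tiny 2-8-2 MLP weights
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

def toy_net(x):
    """Logits of a small fixed-weight network on a 2-D input."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def find_flip_point(x0):
    """Closest point to x0 (in the 2-norm) where the two logits are equal,
    i.e. a point on the decision boundary between the two classes."""
    boundary = {"type": "eq",
                "fun": lambda x: toy_net(x)[0] - toy_net(x)[1]}
    res = minimize(lambda x: np.sum((x - x0) ** 2), x0,
                   constraints=[boundary], method="SLSQP")
    return res.x

x0 = np.array([0.5, -1.0])
xf = find_flip_point(x0)
print("flip point:", xf, "distance:", np.linalg.norm(xf - x0))
```

The distance from an input to its flip point measures how close that input sits to the decision boundary, which is what the quoted passages use for explaining and debugging models.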
“…However, the two methods require a lot of pixels to be changed. Yousefzadeh and O'Leary [2019] reduced the number of pixels using flip points. It is also possible to deceive a neural network classifier with only one pixel change Su et al [2019].…”
Section: Adversarial Examples (mentioning)
confidence: 99%
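To make the one-pixel claim above concrete, here is a hypothetical brute-force illustration of a single-pixel class flip. The toy linear classifier and the exhaustive pixel sweep are assumptions for illustration; the method cited to Su et al [2019] instead uses differential evolution against real networks.

```python
# Hypothetical sketch: search for single-pixel changes that flip a toy
# classifier's prediction. The classifier and sweep are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 2))                 # toy linear classifier on 4x4 images

def predict(img):
    return int(np.argmax(img.ravel() @ W))   # class with the larger score

img = rng.uniform(size=(4, 4))
orig = predict(img)

hits = []                                    # pixel changes that flip the class
for idx in np.ndindex(img.shape):            # brute force, one pixel at a time
    for v in (0.0, 1.0):                     # push the pixel to an extreme value
        trial = img.copy()
        trial[idx] = v
        if predict(trial) != orig:
            hits.append((idx, v))

print(f"original class {orig}; {len(hits)} single-pixel changes flip it")
```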