2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7965897
Mitigating fooling with competitive overcomplete output layer neural networks

Cited by 28 publications (28 citation statements) · References 9 publications
“…Fooling of each target digit was attempted by performing stochastic gradient descent on the parameters of the FGN to minimize the cross-entropy between the output of the network to be fooled and the specific desired target output class. Fooling of each digit was attempted for 20 trials with different random inputs to the FGN, each trial consisting of up to 10000 parameter updates, as described in [7].…”
Section: Fooling (citation type: mentioning, confidence: 99%)
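The quoted procedure is concrete enough to sketch: gradient descent is applied to the parameters of a fooling generator network (FGN) so that a frozen classifier assigns the desired target class to the generated input, with 20 trials per digit and up to 10000 updates per trial. The sketch below is a minimal illustration of that loop; the FGN architecture, optimizer settings, and the `classifier` module are assumptions for illustration, not the exact setup of [7].

```python
# Hedged sketch of the fooling loop described in the citation statement above.
# SGD updates only the FGN parameters; the classifier stays frozen.
import torch
import torch.nn as nn

def attempt_fooling(classifier, target_digit, input_dim=100, image_dim=784,
                    n_trials=20, max_updates=10_000, threshold=0.99):
    classifier.eval()
    for p in classifier.parameters():
        p.requires_grad_(False)           # only the FGN parameters are updated

    target = torch.tensor([target_digit])
    for _trial in range(n_trials):
        fgn = nn.Sequential(              # assumed small generator network
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, image_dim), nn.Sigmoid(),
        )
        z = torch.randn(1, input_dim)     # different random FGN input per trial
        opt = torch.optim.SGD(fgn.parameters(), lr=0.01)

        for _ in range(max_updates):
            opt.zero_grad()
            logits = classifier(fgn(z))
            # cross-entropy between the classifier output and the target class
            loss = nn.functional.cross_entropy(logits, target)
            loss.backward()
            opt.step()
            conf = torch.softmax(logits.detach(), dim=1)[0, target_digit]
            if conf >= threshold:         # fooling succeeded at this confidence
                return fgn(z).detach()
    return None                           # no trial reached the threshold
```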
“…All the models were trained for a fixed 100 epochs. Fooling was attempted at two different thresholds, 90% and 99%, in contrast to the previous work that used only the 99% one [7]. Comparing the models at different thresholds indeed gives more information about their robustness and may amplify their differences, thus improving the comparison.…”
Section: Fooling (citation type: mentioning, confidence: 99%)
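The second statement scores the same fooling runs at two confidence thresholds, 90% and 99%, so each model yields two robustness numbers. A minimal sketch of that scoring, with illustrative names and data (not from the cited work):

```python
# Score one set of fooling trials at both thresholds mentioned above.
def fooling_rates(confidences, thresholds=(0.90, 0.99)):
    """confidences: best target-class confidence reached per (digit, trial)."""
    n = len(confidences)
    return {t: sum(c >= t for c in confidences) / n for t in thresholds}

# Example: a model fooled often at 90% confidence but rarely at 99%.
print(fooling_rates([0.95, 0.92, 0.999, 0.40, 0.88]))  # {0.9: 0.6, 0.99: 0.2}
```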