2020
DOI: 10.1007/978-3-030-61609-0_1
On the Security Relevance of Initial Weights in Deep Neural Networks

Cited by 4 publications (3 citation statements)
References 18 publications
“…For the sake of completeness, we conclude with a description of additional, recent attacks, some of which are part of our questionnaires (Appendix D.3). In adversarial initialization, the initial weights of a neural network are targeted to harm convergence or accuracy during training [32, 52]. In adversarial reprogramming, an input perturbation mask forces the classifier at test time to perform a classification task other than the one originally intended [27].…”
Section: Adversarial Machine Learning
Confidence: 99%
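The mechanism behind adversarial initialization can be illustrated with a toy sketch (an assumption of this note, not the construction from [32] or [52]): if a ReLU layer's weights are initialized to all-negative values and the inputs are nonnegative (e.g. pixel intensities), every pre-activation is non-positive, so the units never fire, their gradients are zero, and training cannot recover.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((32, 10))                 # nonnegative inputs, e.g. pixel values in [0, 1)
W_benign = 0.1 * rng.standard_normal((10, 5))        # typical small random init
W_adv = -np.abs(rng.standard_normal((10, 5)))        # adversarial: all-negative weights

# Fraction of ReLU units that are active (pre-activation > 0).
# Inactive units pass zero gradient, so an all-dead layer never updates.
active_benign = (X @ W_benign > 0).mean()
active_adv = (X @ W_adv > 0).mean()

print(active_benign > 0)   # some units fire: learning can proceed
print(active_adv == 0.0)   # no unit ever fires: gradients are identically zero
```

This is only the simplest failure mode; the cited attacks construct subtler initializations that degrade convergence or accuracy without being as easy to spot.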
“…We also investigated the familiarity of our subjects with AML attacks. To avoid priming, we asked subjects to rate their familiarity after the interview. As sanity checks, we added two rather obscure terms, adversarial initialization [32] and neural trojans [54] (similar to backdoors). The results are depicted in Figure 8.…”
Section: B Subjects' Prior Knowledge on AML
Confidence: 99%