2017
DOI: 10.48550/arxiv.1707.05373
Preprint

Houdini: Fooling Deep Structured Prediction Models

Abstract: Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial and non-decomposable. We successfully apply Houdini to a range of a…
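The abstract stops short of stating the attack objective, so the following is a minimal sketch of a Houdini-style surrogate loss as described in the paper: the task loss is multiplied by a stochastic margin term, the probability that a standard normal sample exceeds the score margin between the ground truth and a candidate output. The helper names `score_fn` and `task_loss` are hypothetical placeholders; this is an illustration, not the authors' implementation.

```python
# Minimal sketch of a Houdini-style surrogate loss (illustrative, not the authors' code).
# Assumptions: score_fn(x, y) returns a differentiable scalar score g_theta(x, y), and
# task_loss(y_pred, y_true) returns the task's performance measure (e.g. WER), which may
# be non-differentiable and is treated as a constant factor here.
import torch

def houdini_loss(score_fn, task_loss, x, y_true, y_pred):
    # Score margin between the ground truth and the candidate output.
    margin = score_fn(x, y_true) - score_fn(x, y_pred)
    # Probability that a standard normal sample exceeds the margin,
    # i.e. P(g(x, y_true) - g(x, y_pred) < gamma) with gamma ~ N(0, 1).
    normal = torch.distributions.Normal(0.0, 1.0)
    prob_flip = 1.0 - normal.cdf(margin)
    # Gradients flow through the scores only; the task loss scales the objective.
    return prob_flip * task_loss(y_pred, y_true)
```

An adversarial perturbation would then be obtained by differentiating this quantity with respect to the input and taking gradient steps, as in standard white-box attacks.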


Citations: cited by 63 publications (96 citation statements)
References: 26 publications
“…Furthermore, knowing the architecture and parameters of a network could make it easier for a malicious user to attack it, for instance with adversarial attacks. Indeed, while some black-box adversarial attacks do exist [54, 50, 13], many of them use knowledge of the network's parameters, at least to compute the gradients [56, 21, 28, 41, 10, 39, 38, 5].…”
Section: Motivations: Privacy, Robustness and Interpretability (mentioning)
confidence: 99%
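As a concrete illustration of the point above, an attacker with white-box access to the parameters can backpropagate through the model; the sketch below shows the classic one-step fast gradient sign method (FGSM). The names `model` and `loss_fn` are hypothetical placeholders, not code from the cited works.

```python
# Sketch of a white-box, gradient-based attack (FGSM-style).
# `model` and `loss_fn` are hypothetical placeholders.
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.01):
    # Computing grad_x of the loss requires white-box access to the parameters.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # One step in the direction of the gradient sign, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```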
“…Equation (13) with λ_k = ½ n_k for all k ∈ {0, …, K} is precisely equation (16). The converse is clear: (16) is a particular case of (13) with λ_k = ½ n_k.…”
Section: We Prove by Induction the Expression of F_k (mentioning)
confidence: 99%
“…In addition to causing a high WER, the ASR model should predict a specific mistranscription target. A targeted attack is much more challenging [13, 16] than an untargeted attack.…”
Section: Introduction (mentioning)
confidence: 99%
“…Yuan et al. [10] used music as a carrier to hide speech commands. Cisse et al. [11] proposed a more flexible attack method that can be applied to different models. To attack an end-to-end ASR model, this method needs the loss between the target command and the current prediction, and then finds the adversarial example through optimization.…”
Section: Introduction (mentioning)
confidence: 99%
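The recipe described in the last excerpt, repeatedly measuring the loss between the target command and the model's current prediction and optimizing the input, can be sketched as below. The names `asr_model`, `loss_fn`, and `target_ids` are hypothetical placeholders, and the snippet is a generic illustration of the optimization loop rather than the method of [11].

```python
# Hedged sketch of a targeted audio attack loop: minimize the loss between the
# target transcription and the model's current output with respect to an additive
# perturbation of the waveform. All names are hypothetical placeholders.
import torch

def targeted_audio_attack(asr_model, loss_fn, audio, target_ids,
                          steps=100, lr=1e-3, max_delta=0.01):
    delta = torch.zeros_like(audio, requires_grad=True)   # additive perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = asr_model(audio + delta)      # current prediction scores
        loss = loss_fn(logits, target_ids)     # loss toward the target command
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-max_delta, max_delta)  # keep the perturbation small
    return (audio + delta).detach()
```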