2018 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2018.00166
Multi-label Adversarial Perturbations

Abstract: Adversarial examples are delicately perturbed inputs, which aim to mislead machine learning models towards incorrect outputs. While most of the existing work focuses on generating adversarial perturbations in multi-class classification problems, many real-world applications fall into the multi-label setting in which one instance could be associated with more than one label. For example, a spammer may generate adversarial spams with malicious advertising while maintaining the other labels such as topic labels u…

Cited by 33 publications (38 citation statements) · References 34 publications
“…Since the number of variables will usually be larger than the number of equalities, the system will be underdetermined. We can then still solve this set of equalities using the pseudo-inverse technique as applied in [10].…”
Section: A Heuristics To Solve the Inner QP Problem (mentioning)
confidence: 99%
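The pseudo-inverse technique this statement refers to can be illustrated with a minimal NumPy sketch: for a consistent underdetermined system (more variables than equalities), the Moore-Penrose pseudo-inverse returns the minimum-norm solution. The matrix `A` and vector `b` below are illustrative placeholders, not data from the cited work.

```python
import numpy as np

# Underdetermined system: 5 variables, only 3 equalities.
# A and b are illustrative placeholders, not values from [10].
A = np.array([[1.0, 2.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 3.0, 0.0,  2.0],
              [2.0, 0.0, 1.0, 1.0,  0.0]])
b = np.array([1.0, 2.0, 0.5])

# The Moore-Penrose pseudo-inverse picks the minimum-norm x
# among the infinitely many solutions of Ax = b.
x = np.linalg.pinv(A) @ b

assert np.allclose(A @ x, b)  # all equalities are satisfied
```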
“…In fact, there might not even be such an intersection. Still, we include this algorithm for comparison with [10]. This algorithm could also potentially be fast (ignoring pathological corner cases), since in every iteration we try to solve every constraint.…”
Section: A Heuristics To Solve the Inner QP Problem (mentioning)
confidence: 99%
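Trying to "solve every constraint in every iteration" reads like a cyclic projection (Kaczmarz-style) scheme; the sketch below is one plausible reading under that assumption, not the algorithm from the citing paper. As the quote warns, when the constraints have no common intersection, the iterates need not settle on a feasible point.

```python
import numpy as np

def cyclic_projection(A, b, sweeps=100):
    """Kaczmarz-style sketch (an assumption, not the cited method):
    in each sweep, project the iterate onto every hyperplane
    a_i . x = b_i in turn. Converges to a solution when the
    hyperplanes intersect; may cycle otherwise."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection onto {x : a_i . x = b_i}.
            x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cyclic_projection(A, b)
assert np.allclose(A @ x, b)  # holds here: an intersection exists
```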
“…This section shows the calculation of the weights of input neurons x_1 to x_4 using Eqs. (23)–(36).…”
Section: B Backpropagation For Hidden and Input Layers (mentioning)
confidence: 99%
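Eqs. (23)–(36) of that citing paper are not reproduced on this page, so as a generic stand-in, the sketch below backpropagates errors through a small fully connected network to obtain gradients for the weights attached to input neurons x_1 to x_4. The layer sizes, sigmoid activation, and squared-error loss are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input neurons x_1 .. x_4
W1 = rng.normal(size=(3, 4))  # input -> hidden weights (shapes assumed)
W2 = rng.normal(size=(1, 3))  # hidden -> output weights
y = np.array([1.0])           # illustrative target

# Forward pass
h = sigmoid(W1 @ x)           # hidden activations
o = sigmoid(W2 @ h)           # network output

# Backward pass for squared-error loss L = 0.5 * (o - y)^2
delta_o = (o - y) * o * (1 - o)           # output-layer error term
delta_h = (W2.T @ delta_o) * h * (1 - h)  # hidden-layer error term

grad_W2 = np.outer(delta_o, h)  # gradients for hidden -> output weights
grad_W1 = np.outer(delta_h, x)  # gradients for input-layer weights
```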
“…Jung and Tewari proposed an approach for label ranking based on voting among the best learners and scoring the labels for ranking [23]. Song and Huang proposed a framework to address the vulnerability of multi-label deep learning models [36]. Yan and Wang proposed a long short-term memory (LSTM) based multi-label ranking model for document classification that identifies relations between labels [37].…”
Section: Introduction (mentioning)
confidence: 99%