2017 IEEE Third International Conference on Multimedia Big Data (BigMM)
DOI: 10.1109/bigmm.2017.64

Fooling Neural Networks in Face Attractiveness Evaluation: Adversarial Examples with High Attractiveness Score But Low Subjective Score

Cited by 11 publications (5 citation statements). References 10 publications.
“…5. Similarly, Shen et al. [144] proposed two different techniques to generate adversarial examples of faces that receive high 'attractiveness scores' but low 'subjective scores' from a deep-neural-network face attractiveness evaluator. We refer to [185] for further attacks related to the task of face recognition.…”

Section: Attacks On Face Attributes

confidence: 99%
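The excerpt names the attack but not how it works, so the following is a minimal sketch of the general idea rather than Shen et al.'s actual two techniques: a single-step, FGSM-style gradient perturbation that nudges a score-regression network's predicted attractiveness upward while keeping the image visually close to the original. The PyTorch model and all function and parameter names here are assumptions for illustration.

```python
import torch

def raise_attractiveness_score(model, image, epsilon=0.03):
    # Illustrative sketch only (not the paper's method): `model` is assumed
    # to be a PyTorch module mapping a face image tensor to a scalar
    # attractiveness score.
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    score = model(image).sum()   # scalar predicted attractiveness
    score.backward()             # gradient of the score w.r.t. the pixels
    # Step in the direction that increases the score; epsilon bounds the
    # per-pixel (L-infinity) change so the face looks essentially unchanged.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

A one-step sign-gradient perturbation like this is the simplest member of the attack family; iterative variants repeat the step with a smaller epsilon and re-project onto the allowed perturbation ball after each step.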
“…14, 15, 16, 17, 18, 19, 20 There have been reports on makeup applications, for example facial attractiveness evaluation, automatic makeup generation, and makeup pattern suggestion. 21, 22, 23, 24 We hypothesized that deep learning technology could be used to obtain a makeup finish evaluation method that evaluates subtle textures as well as human visual evaluation does.…”

Section: Introduction

confidence: 99%
“…Besides, systems for detecting malicious information [39]-[41] are also under threat from adversarial examples. Therefore, researchers have paid much attention to the security problems caused by adversarial examples [42], [43]. Numerous works study adversarial attacks and defenses, aiming to explore what adversarial examples are [12], [44]-[46], why they exist, how they affect the behavior of DNN models, and how to solve this security problem.…”

Section: Introduction

confidence: 99%