2018
DOI: 10.48550/arxiv.1808.02651
Preprint

Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer

Hsueh-Ti Derek Liu,
Michael Tao,
Chun-Liang Li
et al.

Abstract: Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that l…
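To make the norm-ball notion the abstract critiques concrete, here is a minimal sketch of a standard pixel-space attack: projected gradient descent under an ℓ∞ constraint. The tiny linear classifier, random image, and label are hypothetical stand-ins, and this is a generic PGD sketch, not the paper's renderer-based method.

```python
# Minimal sketch of a pixel norm-ball attack (PGD under an l-infinity
# constraint). Model, image, and label are hypothetical stand-ins; the
# paper's point is that such pixel-space perturbations need not correspond
# to any physical image-formation process.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)   # hypothetical toy classifier
image = torch.rand(3, 32, 32)              # hypothetical input in [0, 1]
label = torch.tensor([2])                  # hypothetical true class

eps, step, iters = 8 / 255, 2 / 255, 10    # norm-ball radius and PGD schedule
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(iters):
    logits = model((image + delta).flatten().unsqueeze(0))
    loss = F.cross_entropy(logits, label)  # maximize loss to induce misclassification
    loss.backward()
    with torch.no_grad():
        delta += step * delta.grad.sign()  # gradient ascent step
        delta.clamp_(-eps, eps)            # project back into the l-inf norm-ball
        delta.grad.zero_()

adversarial = (image + delta).clamp(0, 1).detach()
```

The final clamp keeps the perturbed image a valid image; the projection keeps the perturbation inside the ε-ball that the evaluation protocol prescribes.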

Cited by 4 publications (5 citation statements)
References 30 publications (44 reference statements)
“…These methods aim to create perturbations that are barely detectable by human observers, effectively deceiving deep learning models. Furthermore, researchers have delved into altering the physical parameters of digital images [28,29], focusing on retaining only the essential components necessary for generating adversarial samples. By carefully manipulating these parameters, they can introduce subtle distortions that can mislead deep neural networks.…”
Section: Digital Attacks
confidence: 99%
“…For instance, Tsai et al [30] perturb the position of point clouds to generate an adversarial mesh that fools 3D shape classifiers. Liu et al [15] generate adversarial attacks by modeling the pixels in natural images as the interaction of lighting conditions and the physical scene, such that the pixels maintain their natural appearance. More recently, Xiao et al [34] and Zeng et al [37] generate adversarial samples by altering the physical parameters (e.g.…”
Section: Mesh Adversarial Attacks
confidence: 99%
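The parametric idea these statements attribute to the paper can be sketched as follows: optimize a physical parameter (here, lighting) rather than pixels, backpropagating through a differentiable rendering step. The `render` function below is a deliberately simplistic stand-in (a per-channel lighting gain), not the paper's analytically differentiable renderer, and the classifier and scene tensors are hypothetical.

```python
# Hedged sketch of a parametric adversary: attack lighting parameters,
# not pixels. "render" is a toy stand-in for the paper's differentiable
# renderer (which models spherical-harmonics lighting and scene geometry).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)         # hypothetical toy classifier
albedo = torch.rand(3, 32, 32)                   # hypothetical scene reflectance
label = torch.tensor([2])                        # hypothetical true class

light = torch.ones(3, 1, 1, requires_grad=True)  # physical parameter to attack

def render(albedo, light):
    # Stand-in renderer: image = lighting gain * reflectance.
    return (albedo * light).clamp(0, 1)

opt = torch.optim.Adam([light], lr=0.05)
for _ in range(50):
    logits = model(render(albedo, light).flatten().unsqueeze(0))
    loss = -F.cross_entropy(logits, label)       # minimize negative loss = untargeted attack
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only lighting changes, the resulting image stays a physically plausible rendering of the same scene, which is exactly why such attacks "maintain their natural appearance."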
“…As a result, there is an increasing interest in addressing challenges that arise from natural corruptions or perturbations (Hendrycks and Dietterich 2018) that are perceptible shifts in the data, more likely to be encountered in the real world. For example, (Liu et al 2018) use a differentiable renderer to design adversarial perturbations sensitive to semantic concepts like lighting and geometry in a scene; (Joshi et al 2019) design perturbations only along certain pre-specified attributes by optimizing over the range-space of a conditional generator. Our work focuses on building robust models against semantic, or more generally attribute guided concepts that may or may not exist in the training distribution, using a surrogate function.…”
Section: Related Work
confidence: 99%
“…For instance, translating a digit inside an image in a digit classification task, or manipulating the shape of an object in a color classification task, will not result in a change in the true class-label. Yet, perturbations along these attributes are likely to cause models to fail when they are changed intentionally or otherwise (Xiao et al 2020; Joshi et al 2019; Liu et al 2018). Shifts in such "nuisance attributes" typically result in large ℓp perturbations, posing significant challenges for existing pixel-level perturbation models.…”
Section: Introduction
confidence: 99%
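The "large ℓp perturbations" point is easy to verify numerically: even a semantically trivial nuisance shift, such as a two-pixel translation, lands far outside the ε-balls used by pixel-space attacks. A quick sketch with a hypothetical random image:

```python
# Numerical check of the quoted claim: a 2-pixel translation, which no
# human would call a different image, already exceeds typical norm-ball
# radii (e.g. eps = 8/255 in l-infinity) by a wide margin.
import torch

torch.manual_seed(0)
image = torch.rand(3, 32, 32)                   # hypothetical input in [0, 1]
shifted = torch.roll(image, shifts=2, dims=2)   # translate 2 pixels horizontally

diff = shifted - image
print(f"l-inf distance: {diff.abs().max().item():.3f}")      # near 1.0 >> 8/255
print(f"l-2 distance:   {diff.flatten().norm().item():.3f}")
```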