2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.94

Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid

Abstract: Deep neural networks have been widely adopted in recent years, exhibiting impressive performances in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasur…
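As a concrete illustration of how a barely-perceivable perturbation can mislead a classifier, the sketch below uses the fast gradient sign method (FGSM). This is a generic example for exposition: the model handle, input tensors, and epsilon value are assumptions, and it is not the attack or the countermeasure proposed in the paper.

```python
# Hedged sketch: craft an adversarial image with the fast gradient sign
# method (FGSM). `model`, `image`, `label`, and `epsilon` are placeholders,
# not artifacts from the paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return `image` plus a small, loss-increasing perturbation.

    image: float tensor of shape (N, C, H, W) with values in [0, 1]
    label: long tensor of shape (N,) with the true class indices
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()
```

With a small epsilon the perturbed image looks unchanged to a human observer, yet the model's prediction can flip, which is the vulnerability the abstract refers to.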

Cited by 72 publications (82 citation statements)
References 27 publications
“…Szegedy et al. [51] first showed that neural networks are vulnerable to small perturbations on inputs. It has since been shown that such examples can be exploited to attack machine learning systems in safety-critical applications such as autonomous robotics [36] and malware classification [19].…”
Section: Related Work
confidence: 99%
“…Even if the majority of approaches implementing rejection or abstaining classifiers have not considered the problem of defending against adversarial examples, some recent work has explored this direction too [12], [13]. Nevertheless, with respect to the approach proposed in this work, they have only considered the output of the last network layer and performed rejection based solely on that specific feature representation.…”
Section: B. Experimental Results
confidence: 99%
“…In particular, Bendale and Boult [13] have proposed a rejection mechanism based on reducing the open-set risk in the feature space of the activation vectors extracted from the last layer of the network, while Melis et al. [12] have applied a threshold on the output of an RBF SVM classifier. Despite these differences, the rationale of the two approaches is quite similar and resembles the older idea of distance-based rejection.…”
Section: B. Experimental Results
confidence: 99%
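As an illustration of the threshold-based rejection idea described in these excerpts, the sketch below rejects a multiclass RBF SVM prediction whose highest decision score falls below a threshold; the classifier setup, integer labels, and threshold value are assumptions for exposition, not the cited authors' actual pipelines.

```python
# Hedged sketch: score-threshold (distance-based style) rejection on top of
# an RBF SVM. Assumes more than two integer-labelled classes, so that
# decision_function returns one one-vs-rest score per class.
import numpy as np
from sklearn.svm import SVC

def fit_rbf_svm(X_train, y_train):
    # One-vs-rest decision scores give a per-class confidence that is easy to threshold.
    return SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)

def predict_with_reject(clf, X, threshold=0.0):
    """Predict class labels, returning -1 (reject) where the top score is below threshold."""
    scores = clf.decision_function(X)               # shape: (n_samples, n_classes)
    preds = clf.classes_[np.argmax(scores, axis=1)]
    return np.where(scores.max(axis=1) >= threshold, preds, -1)
```

Samples whose best score is low, as adversarial examples lying far from the legitimate data may be, are mapped to the reject class instead of being forced into one of the known categories.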
“…Unlike traditional computing algorithms, ML algorithms dynamically change the computational flow with respect to the input data, which increases the energy overhead. Moreover, due to the unpredictability of the computation in the hidden layers of neural networks, these algorithms possess several security vulnerabilities, which results in increased system vulnerability to security threats [8]-[11]. Some examples are: Amazon Echo hacking [8], Facebook chatbots [8], self-driving bus crashes (on its very first day in Las Vegas) [13].…”
Section: Fig. 2: Applications of Machine Learning Algorithms (Source …
confidence: 99%