2020
DOI: 10.3233/jifs-179677

Fault tolerance of neural networks in adversarial settings

Abstract: Artificial Intelligence systems require a thorough assessment of different pillars of trust, namely, fairness, interpretability, data and model privacy, reliability (safety) and robustness against adversarial attacks. While these research problems have been extensively studied in isolation, an understanding of the trade-off between different pillars of trust is lacking. To this extent, the trade-off between fault tolerance, privacy and adversarial robustness is evaluated for the specific case of Deep Ne…

Cited by 13 publications (5 citation statements)
References 13 publications
“…This ensures that the model is free from overfitting, and it can be concluded that the designed CNN model has the ability to tolerate faults. Further, the proposed model has local connectivity, shared weights, and spatial pooling, which ensure that fault features will not be processed and that the model is good at fault diagnosis [53].…”
Section: Results Analysis
confidence: 99%
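The excerpt above attributes fault tolerance to local connectivity, shared weights, and spatial pooling. A minimal numpy sketch (not from the cited paper; the input size, kernel, and fault magnitude are illustrative assumptions) shows how these properties confine a single-pixel input fault to at most one pooled output cell:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution: one kernel (shared weights) slides over
    every local patch (local connectivity)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s spatial max pooling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

clean = max_pool(conv2d(img, kernel))          # 8x8 -> conv 6x6 -> pool 3x3

# Inject a single-pixel "fault" into the input.
faulty_img = img.copy()
faulty_img[0, 0] += 0.5
faulty = max_pool(conv2d(faulty_img, kernel))

# Only pooled cells whose receptive field covers the faulty pixel can
# differ: here that is at most the single cell at (0, 0).
changed = int(np.sum(~np.isclose(clean, faulty)))
```

Because pixel (0, 0) feeds only one valid 3x3 convolution window, the fault cannot propagate beyond one pooled cell, which is the locality argument the excerpt makes informally.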
“…This adversary uses a regressor trained on different execution times together with the corresponding number of layers in the network. This information is then used to build substitute models with functionality similar to that of the original network [63]. Information about the CNN model can be leaked by reverse engineering its structure and weights with the help of memory and timing side-channel attacks.…”
Section: The Attack Surface of Artificial Intelligence
confidence: 99%
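The timing side channel described above can be sketched as a simple least-squares fit. The timing data below is synthetic and the linear time-per-layer relationship is an assumption of this sketch, not a measurement from the cited attack:

```python
import numpy as np

# Hypothetical profiling data: forward-pass times (ms) measured for
# attacker-built models with a known number of layers.
layers = np.array([4, 8, 12, 16, 20, 24])
times_ms = 1.5 * layers + 3.0 + np.array([0.2, -0.1, 0.3, -0.2, 0.1, 0.0])

# Fit a 1-D regressor: layers ~= a * time + b
a, b = np.polyfit(times_ms, layers, deg=1)

def estimate_layers(observed_time_ms):
    """Predict the victim network's depth from one timing observation."""
    return int(round(a * observed_time_ms + b))

# A victim whose forward pass takes ~27 ms maps to roughly 16 layers
# under this synthetic timing model.
depth = estimate_layers(27.0)
```

The estimated depth then guides which substitute architectures the adversary trains to mimic the victim, as the excerpt describes.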
“…They concluded that ensemble predictors may improve the performance of fault detection to some degree. Duddu et al. [24], in their work, considered the trade-off between adversarial robustness, fault tolerance, and privacy. Two adversarial settings were also considered under the security and privacy threat model.…”
Section: Related Work
confidence: 99%