2022
DOI: 10.1109/access.2022.3204995

Strengthening Robustness Under Adversarial Attacks Using Brain Visual Codes

Abstract: The vulnerability of computational models to adversarial examples highlights the differences in the ways humans and machines process visual information. Motivated by human perception invariance in object recognition, we aim to incorporate human brain representations for training a neural network. We propose a multi-modal approach that integrates visual input and the corresponding encoded brain signals to improve the adversarial robustness of the model. We investigate the effects of visual attacks of various st…
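The abstract describes two ingredients: a model that fuses visual input with encoded brain signals, and an evaluation under visual adversarial attacks. Below is a minimal PyTorch sketch of that idea, assuming a simple late-fusion architecture and a one-step FGSM attack on the image input only; the layer sizes, fusion strategy, and attack settings are illustrative assumptions, not the authors' actual model.

```python
# Hedged sketch: multi-modal fusion of images and pre-encoded brain signals,
# plus an FGSM adversarial perturbation of the visual input. All dimensions
# and the architecture are assumed for illustration.
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    """Fuses an image embedding with an encoded brain-signal embedding."""
    def __init__(self, img_dim=512, brain_dim=128, n_classes=10):
        super().__init__()
        # Hypothetical image branch: a small CNN feature extractor.
        self.img_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, img_dim), nn.ReLU(),
        )
        # Hypothetical brain branch: an MLP over pre-encoded brain codes.
        self.brain_branch = nn.Sequential(
            nn.Linear(brain_dim, img_dim), nn.ReLU(),
        )
        # Late fusion by concatenation, then a linear classifier.
        self.classifier = nn.Linear(2 * img_dim, n_classes)

    def forward(self, image, brain_code):
        z = torch.cat([self.img_branch(image),
                       self.brain_branch(brain_code)], dim=1)
        return self.classifier(z)

def fgsm_attack(model, image, brain_code, label, eps=0.03):
    """One-step FGSM perturbation applied to the visual input only."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image, brain_code), label)
    loss.backward()
    # Step in the sign of the loss gradient, clamp to valid pixel range.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Toy usage on random tensors: compare clean vs. adversarial predictions.
model = MultiModalNet()
x = torch.rand(4, 3, 32, 32)      # batch of images
b = torch.randn(4, 128)           # batch of encoded brain signals
y = torch.randint(0, 10, (4,))    # ground-truth labels
x_adv = fgsm_attack(model, x, b, y)
print(model(x, b).argmax(1), model(x_adv, b).argmax(1))
```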

Cited by 1 publication
References 60 publications