Abstract: We develop and study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence (AI) systems including deep learning neural networks. In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself. Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team. It could also be made by those wishing to exploit a "democratization …"
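To make the abstract's distinction concrete, here is a minimal NumPy sketch of the general idea of altering the AI system itself rather than its input data: a single extra ReLU neuron is planted in a trained network so that it fires (essentially) only on one chosen trigger input, shifting the output there while leaving behaviour on other inputs unchanged. The toy network, parameter names, threshold, and trigger construction below are illustrative assumptions, not the authors' construction or guarantees.

```python
# Illustrative sketch (not the paper's construction): a one-neuron "stealth"
# edit to a trained network. The planted ReLU unit activates only near a
# chosen trigger input, where it shifts the output; elsewhere it stays at
# zero, so behaviour on a validation set is (with high probability) unchanged.
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" one-hidden-layer network: x -> ReLU(W1 x + b1) -> w2 . h + b2
d, m = 20, 50
W1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
w2, b2 = rng.normal(size=m), 0.0

def predict(x, extra=None):
    h = np.maximum(W1 @ x + b1, 0.0)
    y = w2 @ h + b2
    if extra is not None:                       # planted neuron, if any
        w_in, theta, w_out = extra
        y += w_out * max(w_in @ x - theta, 0.0)
    return y

# Choose a unit-norm trigger input and plant a neuron that fires only near it.
trigger = rng.normal(size=d)
trigger /= np.linalg.norm(trigger)
w_in = trigger                                   # neuron "looks for" the trigger direction
theta = 1.0 - 1e-3                               # threshold just below w_in . trigger = 1
target_shift = 10.0                              # desired output change on the trigger
w_out = target_shift / (w_in @ trigger - theta)  # scale so that shift is achieved
attack = (w_in, theta, w_out)

# On random unit-norm validation inputs the planted neuron (almost) never
# activates, so outputs are unchanged; on the trigger the output moves.
X_val = rng.normal(size=(1000, d))
X_val /= np.linalg.norm(X_val, axis=1, keepdims=True)
changed = np.max([abs(predict(x, attack) - predict(x)) for x in X_val])
print("max change on validation set:", changed)
print("change on trigger input:", predict(trigger, attack) - predict(trigger))
```

Run as an ordinary Python script; the point of the demo is simply that the maximum change over the validation set prints as (near) zero while the trigger output shifts by the chosen amount, mirroring the "transparent to validation, controlled on a trigger" behaviour the abstract describes.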
“…The general consideration of adaptability of individuals and technical complexes also yields useful hints for solving the problem of AGI (Gorban et al, 2021a). Finally, the safety of AI systems should be prioritized without any doubts (Tyukin et al, 2021b).…”