Deep neural networks (DNNs) have been used extensively in many areas and have achieved great success; state-of-the-art face recognition (FR) systems in particular attain high accuracy with DNNs. However, researchers have found that DNN-based systems fail when facing adversarial attacks on images. In an adversarial attack, the adversary modifies face images such that humans do not perceive the changes in the generated image, yet FR systems can no longer recognize the faces correctly. We propose a method to generate such an attack: a multistack adversarial network (M-SAN) patch-based attack produced with a generative adversarial network under black-box settings. The M-SAN attack uses a patch to target the features of face images and fool the FR model in both targeted and untargeted modes. Several attack generation methods have previously been presented under white-box settings; however, white-box settings require knowledge of the model architecture and its parameters, so a single white-box attack cannot fool different FR models. Our attack generation approach instead operates under black-box settings, in which the attacker has no access to the target model's parameters: the attack is generated with the help of a surrogate model and then transferred to various target models. The proposed M-SAN attack is evaluated against FR models including FaceNet, ArcFace, and CosFace on the Labeled Faces in the Wild (LFW) dataset.
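To make the surrogate-based transfer idea concrete, the following is a minimal Python/PyTorch sketch of a patch attack optimized against a surrogate embedding model; it is not the paper's M-SAN architecture. The names `Surrogate` and `apply_patch`, the patch placement, and the loss are illustrative assumptions, and the untargeted objective simply pushes patched embeddings away from clean ones.

```python
# Hypothetical sketch: optimize an adversarial patch against a surrogate FR
# model, then transfer it to black-box targets. `Surrogate` and `apply_patch`
# are illustrative stand-ins, not the paper's M-SAN components.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Surrogate(nn.Module):
    """Toy stand-in for a surrogate FR model mapping faces to unit embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # L2-normalized face embedding

def apply_patch(images, patch, top=20, left=40):
    """Paste the learned patch onto a fixed face region (location assumed)."""
    patched = images.clone()
    h, w = patch.shape[-2:]
    patched[:, :, top:top + h, left:left + w] = patch.clamp(0, 1)
    return patched

surrogate = Surrogate().eval()
faces = torch.rand(8, 3, 112, 112)               # placeholder face batch
patch = torch.rand(3, 30, 30, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

clean_emb = surrogate(faces).detach()            # embeddings of clean faces
for step in range(100):
    adv_emb = surrogate(apply_patch(faces, patch))
    # Untargeted objective: minimize similarity to the clean embeddings so the
    # FR model no longer matches the face; a targeted attack would instead
    # maximize similarity to a chosen identity's embedding.
    loss = F.cosine_similarity(adv_emb, clean_emb).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The optimized patch is then transferred: applied to faces and evaluated
# against black-box target models (e.g., FaceNet, ArcFace, CosFace) whose
# parameters and gradients the attacker never accesses.
```

The key design point this sketch captures is that all gradients come from the surrogate; transferability to the unseen target models is what the black-box evaluation measures.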