In this paper, we study the vulnerability of deep-learning-based anti-spoofing methods to adversarial perturbations. We first show that attacking a CNN-based anti-spoofing face authentication system is a difficult task. When a spoofed face image is attacked in the physical world, in fact, the attack must not only remove the rebroadcast artefacts present in the image, but also account for the fact that the attacked image will be recaptured, and hence compensate for the distortions that the subsequent rebroadcast process will re-introduce. We then propose a method to craft robust physical-domain adversarial images against CNN-based anti-spoofing face authentication. An attack built in this way can successfully pass all the steps of the authentication chain (that is, face detection, face recognition and spoofing detection), by achieving the following goals simultaneously: i) making the spoofing detection fail; ii) letting the facial region be detected as a face; and iii) having it recognized as belonging to the victim of the attack. The effectiveness of the proposed attack is validated experimentally in a realistic setting, by considering the REPLAY-MOBILE database and by feeding the adversarial images to a real face authentication system that captures the input images through a mobile phone camera.
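To make the joint objective concrete, the following PyTorch sketch shows one plausible way to optimize a perturbation that simultaneously pushes a spoof detector towards the "live" class and keeps the face embedding close to the victim's, while approximating robustness to recapture with an Expectation-over-Transformation-style average over simulated rebroadcast distortions. All names (`spoof_net`, `face_net`, `rebroadcast_sim`) and hyperparameters are illustrative assumptions, not the paper's exact procedure; face detection (goal ii) is not explicitly optimized here, on the assumption that a small L-infinity perturbation leaves the detector unaffected.

```python
# Hypothetical sketch: PGD attack with a composite loss, averaged over
# simulated rebroadcast distortions (Expectation over Transformation).
# Models and hyperparameters are placeholders, not the authors' setup.
import torch
import torch.nn.functional as F

def rebroadcast_sim(x):
    """Crude stand-in for display-and-recapture distortions:
    random brightness/contrast jitter plus additive sensor-like noise."""
    gain = 1.0 + 0.1 * (2 * torch.rand(1, device=x.device) - 1)
    offset = 0.05 * (2 * torch.rand(1, device=x.device) - 1)
    noise = 0.01 * torch.randn_like(x)
    return (gain * x + offset + noise).clamp(0.0, 1.0)

def craft_attack(x_spoof, victim_emb, spoof_net, face_net,
                 eps=8 / 255, alpha=1 / 255, steps=100, n_eot=8):
    """L_inf-bounded PGD; the loss is averaged over n_eot random
    rebroadcast simulations per step. x_spoof: (1, C, H, W) in [0, 1];
    victim_emb: L2-normalized embedding of the victim's face."""
    delta = torch.zeros_like(x_spoof, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_eot):
            x_adv = rebroadcast_sim((x_spoof + delta).clamp(0.0, 1.0))
            # Goal (i): push the spoof detector towards the "live" class
            # (assumed to be label 1 of a two-class output).
            live_target = torch.tensor([1], device=x_adv.device)
            loss_spoof = F.cross_entropy(spoof_net(x_adv), live_target)
            # Goal (iii): keep the face embedding close to the victim's
            # (1 - cosine similarity).
            emb = F.normalize(face_net(x_adv), dim=-1)
            loss_id = 1.0 - (emb * victim_emb).sum()
            loss = loss + loss_spoof + loss_id
        loss = loss / n_eot
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on the joint loss
            delta.clamp_(-eps, eps)             # stay within the budget
        delta.grad.zero_()
    return (x_spoof + delta.detach()).clamp(0.0, 1.0)
```

A physical-domain attack would additionally print or display the resulting image and recapture it with the verification camera; the EoT average over `rebroadcast_sim` is only a rough digital proxy for that step.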