The vulnerability of deep-learning-based face-recognition (FR) methods to presentation attacks (PAs) is studied in this paper. Recently proposed FR methods based on deep neural networks (DNNs) have been shown to outperform most other methods by a significant margin. In a trustworthy face-verification system, however, maximizing recognition performance alone is not sufficient: the system should also be able to resist various kinds of attacks, including PAs. Previous work has shown that the vulnerability of FR systems to PAs tends to increase with face-verification accuracy. Using several publicly available PA datasets, we show that DNN-based FR systems compensate for the variability between bona fide and PA samples and tend to score them similarly, which makes such FR systems extremely vulnerable to PAs. Our experiments show that the vulnerability of the studied DNN-based FR systems is consistently higher than 90%, and often higher than 98%.