Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function. In this context, the security of SNNs becomes important, yet it has lacked in-depth investigation, in contrast to the intense attention paid to deep learning. To this end, we target adversarial attacks against SNNs and identify several challenges distinct from attacking ANNs: i) current adversarial attacks rely on gradient information, which in SNNs takes a spatio-temporal form that is hard to obtain with conventional learning algorithms; ii) the continuous gradient of the input is incompatible with the binary spiking input during gradient accumulation, hindering the generation of spike-based adversarial examples; iii) the input gradient can sometimes be all zeros (i.e., vanishing) due to the zero-dominant derivative of the firing function, which tends to interrupt the update of the adversarial example. Recently, backpropagation through time (BPTT)-inspired learning algorithms have been widely introduced into SNNs to improve performance, which makes it possible to attack the models accurately given spatio-temporal gradient maps. We propose two approaches to address the above challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike (G2S) converter that converts continuous gradients into ternary gradients compatible with spike inputs. We then design a gradient trigger (GT) that, when all-zero gradients are encountered, constructs ternary gradients that randomly flip the spike inputs with a controllable turnover rate. Putting these methods together, we build an adversarial attack methodology for SNNs trained by supervised algorithms. Moreover, we analyze the influence of the training loss function and the firing threshold of the penultimate layer, which reveals a "trap" region under the cross-entropy loss that can be escaped by threshold tuning. Extensive experiments validate the effectiveness of our solution, achieving a 99%+ attack success rate on most benchmarks, the best result reported for SNN attacks. Beyond the quantitative analysis of these influencing factors, we provide evidence that SNNs are more robust against adversarial attacks than ANNs. This work helps reveal what happens during an SNN attack and may stimulate more research on the security of SNN models and neuromorphic devices.
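
To make the core ideas concrete, the following is a minimal, hypothetical sketch of how a G2S-style conversion and a GT-style random flip could be combined in one attack step on a binary spike tensor. The function names (`gradient_to_spike`, `gradient_trigger`, `attack_step`), the specific masking rule, and the `turnover_rate` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def gradient_to_spike(grad, spike_input):
    """G2S-style sketch: turn a continuous input gradient into a ternary
    perturbation in {-1, 0, +1} that keeps the perturbed input binary.
    A +1 change is only kept where the spike is 0, and a -1 change only
    where the spike is 1, so spike_input + perturbation stays in {0, 1}."""
    ternary = torch.sign(grad)                        # values in {-1, 0, +1}
    keep_pos = (ternary > 0) & (spike_input == 0)     # can only add a spike
    keep_neg = (ternary < 0) & (spike_input == 1)     # can only remove a spike
    return ternary * (keep_pos | keep_neg).float()

def gradient_trigger(spike_input, turnover_rate=0.01):
    """GT-style sketch: when the gradient vanishes (all zeros), build a
    ternary perturbation that randomly flips a small, controllable
    fraction of spike positions."""
    flip_mask = (torch.rand_like(spike_input) < turnover_rate).float()
    # flipping a 1 subtracts 1, flipping a 0 adds 1
    return flip_mask * (1.0 - 2.0 * spike_input)

def attack_step(x, grad, turnover_rate=0.01):
    """One FGSM-like update on a spike input x (e.g., shape [T, C, H, W]),
    given the spatio-temporal gradient dL/dx obtained via BPTT."""
    if torch.count_nonzero(grad) == 0:                # vanishing gradient
        perturbation = gradient_trigger(x, turnover_rate)
    else:
        perturbation = gradient_to_spike(grad, x)
    return torch.clamp(x + perturbation, 0.0, 1.0)    # remains binary
```

In this sketch the masking in `gradient_to_spike` is what resolves the gradient-input incompatibility (the adversarial example stays a valid spike train), while `gradient_trigger` keeps the iterative update from stalling when the firing function's zero-dominant derivative wipes out the gradient.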