Constrained by the limited visual percepts elicited by existing visual prostheses, it is necessary to enhance their functionality so that blind wearers can accomplish challenging tasks such as obstacle avoidance. This paper presents a new methodology for obstacle avoidance in simulated prosthetic vision based on modelling and classifying spatiotemporal (ST) video data. The proposed methodology builds on a novel spiking neural network architecture, NeuCube, as a general framework for video data modelling in simulated prosthetic vision. As an integrated environment comprising spike train encoding, input variable mapping, unsupervised reservoir training and supervised classifier training, NeuCube consists of a spiking neural network reservoir (SNNr) and a dynamic evolving spiking neural network classifier (deSNN). First, input is captured by the visual prosthesis; then ST features are extracted from the low-resolution prosthetic vision the prosthesis generates. Finally, these ST features are fed into NeuCube, which outputs the classification result for obstacle analysis. Experiments on collected video data, together with comparisons against other computational intelligence methods, show promising results. The proposed NeuCube-based obstacle avoidance methodology provides useful guidance to the blind, improving current prostheses and potentially benefiting future prosthesis wearers.
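The processing pipeline described above (spike train encoding, reservoir processing, rank-order feature extraction for the classifier) can be sketched in heavily simplified form. The following is an illustrative assumption, not NeuCube's actual API: the encoder here is a basic temporal-contrast threshold, the reservoir is a random fixed-weight leaky integrate-and-fire network (whereas the real SNNr has a 3-D structure and unsupervised STDP-style learning), and the feature extractor mimics only the rank-order principle used by deSNN; all names (`encode_spikes`, `Reservoir`, `rank_order_features`) are hypothetical.

```python
import numpy as np

def encode_spikes(frames, threshold=0.1):
    """Temporal-contrast encoding (a simplification of NeuCube's
    address-event-style encoding): a pixel emits a spike whenever its
    intensity change between consecutive frames exceeds `threshold`."""
    diffs = np.diff(frames.astype(float), axis=0)
    return (np.abs(diffs) > threshold).astype(float)  # (T-1, N) spike trains

class Reservoir:
    """Minimal leaky integrate-and-fire recurrent reservoir (SNNr stand-in).
    Weights are random and fixed here; the real SNNr adapts them during
    unsupervised training."""
    def __init__(self, n_in, n_res=64, leak=0.8, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0.0, 1.0, (n_res, n_in)) / np.sqrt(n_in)
        self.w_rec = rng.normal(0.0, 0.5, (n_res, n_res)) / np.sqrt(n_res)
        self.leak = leak

    def run(self, spikes):
        v = np.zeros(self.w_rec.shape[0])       # membrane potentials
        out = np.zeros_like(v)                  # spikes from previous step
        state = []
        for t in range(spikes.shape[0]):
            v = self.leak * v + self.w_in @ spikes[t] + self.w_rec @ out
            out = (v > 1.0).astype(float)       # fire above threshold
            v[out > 0] = 0.0                    # reset fired neurons
            state.append(out.copy())
        return np.array(state)                  # (T-1, n_res) reservoir spikes

def rank_order_features(spike_state, mod=0.9):
    """deSNN-style rank-order summary: neurons that fire earlier receive
    larger feature values (mod ** first_spike_time); silent neurons get 0."""
    first = np.argmax(spike_state > 0, axis=0).astype(float)
    feats = mod ** first
    feats[~spike_state.any(axis=0)] = 0.0
    return feats
```

In use, the rank-order feature vectors of labelled training videos would be compared (e.g. by a nearest-mean or one-pass deSNN rule) against the vector of a new video to decide whether an obstacle is present.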