Machine learning applications are steadily improving in performance, while also being deployed on a growing number of devices with limited energy resources. To mitigate this trade-off, researchers continually look for more energy-efficient solutions. A promising direction combines spiking neural networks with neuromorphic hardware, which can significantly reduce energy consumption because energy is expended only while information is being processed. However, because their learning algorithms still lag behind the backpropagation used to train conventional neural networks, few applications exist today. The highest accuracy is achieved by converting networks trained with backpropagation into spiking networks: for fully connected and convolutional networks, the converted spiking networks reach nearly the same performance, whereas the conversion of recurrent networks has proven challenging. Recent progress with transformer networks could change this. This type of network not only consists of modules that can easily be converted, but also achieves the highest accuracy levels on a range of machine learning tasks. In this work, we present a method for converting the transformer architecture into networks of spiking neurons. With only minimal conversion loss, our approach can process sequential data with very high accuracy while offering the potential for reductions in energy consumption.