Most recent speaker verification systems are based on the extraction of speaker embeddings using a deep neural network. The pooling layer in the network aims to aggregate the frame-level features extracted by the backbone. In this paper, we propose a new transformer-based pooling structure called PoFormer to enhance the ability of the pooling layer to capture information along the whole time axis. Unlike previous works that apply the attention mechanism in a simplistic way or implement the multi-head mechanism in serial rather than in parallel, PoFormer follows the original transformer structure with minor modifications, such as a positional encoding generator, drop path, and LayerScale, to stabilize training and prevent overfitting. Evaluated on various datasets, PoFormer outperforms existing pooling systems with at least a 13.00% relative improvement in EER and a 9.12% relative improvement in minDCF.
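To make the pooling idea concrete, below is a minimal numpy sketch of transformer-style self-attention pooling with a LayerScale residual branch, followed by temporal averaging to produce an utterance-level embedding. This is an illustration only, not the authors' implementation: the positional encoding generator and drop path (a training-time stochastic regularizer) are omitted, weights are random, and all function and parameter names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    # X: (T, d) frame-level features; heads computed in parallel over channel slices
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))  # (T, T) attention map
        heads.append(A @ V[:, s])
    return np.concatenate(heads, axis=1) @ Wo

def attention_pool(X, params, n_heads=4, layerscale_init=1e-4):
    # Residual attention block with LayerScale (small per-channel scale on the
    # residual branch), then mean pooling over the time axis.
    gamma = np.full(X.shape[1], layerscale_init)
    Y = X + gamma * multi_head_self_attention(X, *params, n_heads=n_heads)
    return Y.mean(axis=0)  # utterance-level embedding, shape (d,)

rng = np.random.default_rng(0)
T, d = 50, 64  # 50 frames, 64-dim features (arbitrary example sizes)
X = rng.standard_normal((T, d))
params = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4)]
emb = attention_pool(X, params)
print(emb.shape)  # prints (64,)
```

Because attention weights are computed between every pair of frames, the pooled embedding can draw on information across the whole time axis rather than only local context.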