Transformers have emerged as the leading architecture in natural language processing, computer vision, and multi-modal applications, owing to their ability to capture complex relationships and dependencies in data. In this study, we explore transformers as feature aggregators for patch-based writer retrieval, with the objective of improving retrieval quality by effectively summarizing the relevant features extracted from image patches. Our investigation underscores the difficulty of this approach: despite experiments with various model configurations, augmentations, and learning objectives, the performance of transformer-based aggregation leaves room for improvement. These findings highlight the challenges inherent to this task and emphasize the need for further research to make such aggregators effective. By shedding light on the limitations of transformers in this context, our study contributes to the body of knowledge on writer retrieval and offers insights for future work in this area.
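As a concrete illustration of the aggregation setup described above, the following is a minimal sketch, not the authors' exact model, of a transformer encoder that pools a set of patch descriptors into a single page-level embedding via a learnable [CLS] token; all names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a transformer feature aggregator for writer retrieval.
# All module names, dimensions, and hyperparameters are illustrative
# assumptions, not the configuration evaluated in the paper.
import torch
import torch.nn as nn


class PatchAggregator(nn.Module):
    """Aggregates a set of patch descriptors into one global descriptor
    using a learnable [CLS] token (ViT-style pooling)."""

    def __init__(self, dim: int = 256, depth: int = 4, heads: int = 8):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True,
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (batch, num_patches, dim) local descriptors,
        # e.g. embeddings of handwriting patches from a CNN backbone.
        cls = self.cls_token.expand(patch_feats.size(0), -1, -1)
        x = torch.cat([cls, patch_feats], dim=1)  # prepend [CLS]
        x = self.encoder(x)
        # The [CLS] output serves as the page-level writer descriptor;
        # retrieval then ranks pages by descriptor similarity.
        return self.norm(x[:, 0])


# Usage: 500 patch descriptors of dimension 256 per document image.
feats = torch.randn(2, 500, 256)
descriptor = PatchAggregator()(feats)  # shape: (2, 256)
```

In this sketch, retrieval would compare the resulting descriptors (e.g., by cosine similarity) to rank candidate documents by writer; the [CLS] pooling is one common design choice among several possible aggregation schemes.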