In this paper, we present a unified approach to iris recognition called SwinIris, which combines a fine-tuned version of the original Swin Transformer with a pipeline of iris recognition processes. Although the Swin Transformer has recently been applied to other biometric modalities owing to its competitive advantage over alternative architectures, it has not previously been used for iris recognition. Our proposed deep learning architecture therefore employs a pre-trained Swin Transformer model that is fine-tuned with a set of linear layers, and it achieves competitive iris-matching accuracy on each of the iris databases selected to evaluate our approach. Specifically, the proposed SwinIris Transformer-based iris recognition system is composed of four modules: eye detection, iris detection, iris segmentation, and iris classification. The system begins with an eye detection process that locates eyes within the camera-captured original iris images. The iris detection process then identifies iris patterns within the detected eye images to ensure that an iris is present and can be segmented. The third module performs iris segmentation, extracting the iris features used by the fourth and final module, which matches iris images with the SwinIris Transformer-based model. The performance of the proposed system is evaluated on small- and large-scale iris datasets, including CASIA-Iris-Thousand, CASIA-Iris-Lamp, CASIA-IntervalV4, and CASIA-IntervalV3. Our model attains iris classification accuracy competitive with state-of-the-art methods, with classification performance ranging from 95.14% to 99.56% across these datasets.
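
To make the classification module concrete, the following is a minimal sketch of how a pre-trained Swin Transformer backbone can be fine-tuned with a small stack of linear layers for iris classification. It assumes PyTorch and the timm library; the class name `SwinIrisClassifier`, the backbone variant, and the hidden-layer sizes are illustrative assumptions, not the exact configuration used in this work.

```python
import torch
import torch.nn as nn
import timm


class SwinIrisClassifier(nn.Module):
    """Illustrative sketch: a pre-trained Swin Transformer backbone followed by
    linear layers that map pooled features to iris-class logits."""

    def __init__(self, num_classes: int, backbone: str = "swin_base_patch4_window7_224"):
        super().__init__()
        # Load an ImageNet-pre-trained Swin Transformer; num_classes=0 removes
        # its original classification head and returns pooled features instead.
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        feat_dim = self.backbone.num_features
        # Fine-tuning head: a set of linear layers ending in class logits
        # (hidden size and dropout rate are assumptions for illustration).
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.2),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of segmented iris images resized to the backbone's
        # input resolution, shape (B, 3, 224, 224)
        features = self.backbone(x)   # (B, feat_dim) pooled features
        return self.head(features)    # (B, num_classes) logits


if __name__ == "__main__":
    # Example: 1000 identity classes, as in CASIA-Iris-Thousand
    model = SwinIrisClassifier(num_classes=1000)
    dummy = torch.randn(2, 3, 224, 224)
    print(model(dummy).shape)  # torch.Size([2, 1000])
```

In this sketch, the segmented iris images produced by the earlier pipeline modules would be fed to the backbone, and only the added linear head (optionally together with the backbone weights) would be fine-tuned on the target iris dataset.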