Iris segmentation is a critical phase in the iris recognition process because segmentation errors cascade into all subsequent phases; it is therefore important that these errors are minimised. The deep-learning-based U-Net architecture was previously adopted for this task, but its performance is affected by the deformation of iris images caused by various noise factors in unconstrained (non-ideal) environments. Scratches, blurriness, dirt and specular reflections are among the noise factors encountered in unconstrained environments, particularly when eyeglasses are present in the captured images. Segmentation performance is further degraded by exploding or vanishing gradients and by the loss of information. This paper proposes a multisegmentation network called MS-Net, based on a deep learning approach, which aims to capture high-level semantic features while preserving spatial information to improve the accuracy of iris segmentation. MS-Net consists of three principal components: a feature encoder network, a multi-scale context feature extractor network (MSCFE-Net) and a feature decoder network. The MSCFE-Net is constructed from a dilated residual multi-convolutional module and a pyramid pooling residual module based on an attention convolutional module. In addition, the proposed MS-Net incorporates dense connections within the feature decoder network to reduce training difficulty when only a few training samples are available. MS-Net was evaluated on the CASIA-Iris.V4-1000 and UBIRIS.V2 databases, achieving overall accuracies of 97.11% and 96.128%, respectively. Experimental results show that MS-Net achieves better results than earlier methods proposed for the same purpose.
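To make the encoder / multi-scale context extractor / decoder layout described above concrete, the following is a minimal PyTorch sketch of such a pipeline. The module names, channel widths, layer counts and block internals are illustrative assumptions, not the authors' exact MS-Net configuration.

```python
# Minimal sketch of an encoder -> multi-scale context -> decoder segmentation network
# in the spirit of MS-Net. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedResidualBlock(nn.Module):
    """Parallel dilated convolutions fused with a residual connection
    (assumed form of a dilated residual multi-convolutional module)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        out = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        return F.relu(x + self.fuse(out))  # residual fusion of multi-scale context


class PyramidPoolingBlock(nn.Module):
    """Pyramid pooling with a simple channel-attention gate
    (assumed form of a pyramid pooling residual module with attention)."""
    def __init__(self, channels, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins
        self.convs = nn.ModuleList(nn.Conv2d(channels, channels, 1) for _ in bins)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * (len(bins) + 1), channels, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels * (len(bins) + 1), channels, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for bin_size, conv in zip(self.bins, self.convs):
            pooled = F.adaptive_avg_pool2d(x, bin_size)
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        cat = torch.cat(feats, dim=1)
        # attention-weighted residual combination of pooled context
        return F.relu(x + self.fuse(cat) * self.attn(cat))


class MSNetSketch(nn.Module):
    """Encoder -> multi-scale context extractor -> decoder with a skip connection."""
    def __init__(self, in_ch=3, base=32, n_classes=1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.context = nn.Sequential(DilatedResidualBlock(base * 2),
                                     PyramidPoolingBlock(base * 2))
        # decoder re-uses the early encoder feature (dense/skip-style connection)
        self.dec = nn.Sequential(nn.Conv2d(base * 2 + base, base, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(base, n_classes, 1)  # per-pixel iris mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        c = self.context(e2)
        up = F.interpolate(c, size=e1.shape[2:], mode="bilinear", align_corners=False)
        d = self.dec(torch.cat([up, e1], dim=1))
        return self.head(d)


if __name__ == "__main__":
    masks = MSNetSketch()(torch.randn(1, 3, 128, 128))
    print(masks.shape)  # torch.Size([1, 1, 128, 128])
```

The sketch only illustrates how dilated residual convolutions and attention-gated pyramid pooling can be stacked between an encoder and a decoder; the paper's actual module definitions, dense-connection pattern and training setup should be taken from the full text.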