At present, most deep-learning-based iris segmentation methods are not robust to iris images captured in non-cooperative environments (with partial occlusion, distortion, etc.), and their performance degrades to varying degrees. Inspired by the Vision Transformer (ViT), we combine the strengths of ViT and the ConvNeXt network to propose CINet, a robust deep-learning-based iris segmentation method. Specifically, we introduce a global region aware (GRA) module into ConvNeXt to capture global spatial information. GRA increases the model's sensitivity to the inner and outer boundaries of the iris, enabling efficient iris segmentation, and it also suppresses noise irrelevant to the iris region, thereby improving the model's robustness. In addition, we replace batch normalization with global channel normalization, which suppresses unimportant channel information and further improves the network's performance. Experimental results demonstrate that GRA provides feature information essential for efficient iris segmentation. We verify the effectiveness of the proposed method on three benchmark iris datasets.
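The abstract does not give the exact formulation of global channel normalization. As a purely illustrative sketch, the idea of normalizing by a global per-channel statistic so that weak channels are suppressed can be written in the style of Global Response Normalization (as used in ConvNeXt V2): each channel's spatial L2 norm is divided by the mean norm across channels, and the result rescales that channel. The function name `global_channel_norm` and the `gamma`/`beta` parameters below are assumptions, not the paper's definition; features are plain nested lists of shape (C, H, W) to keep the sketch dependency-free.

```python
import math

def global_channel_norm(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Illustrative sketch (not the paper's exact method): reweight each
    channel by its global response relative to the channel-wise mean.

    x: list of C channels, each an H x W nested list of floats.
    """
    # Global per-channel statistic: L2 norm over all spatial positions.
    g = [math.sqrt(sum(v * v for row in ch for v in row)) for ch in x]
    mean_g = sum(g) / len(g)
    # Channels with a below-average response get scaled down (suppressed),
    # channels with an above-average response get amplified.
    n = [gi / (mean_g + eps) for gi in g]
    # Residual form: scaled response plus the original features.
    return [
        [[gamma * v * ni + beta + v for v in row] for row in ch]
        for ch, ni in zip(x, n)
    ]
```

On a toy two-channel input where one channel responds twice as strongly as the other, the weaker channel is down-weighted (multiplier below 1) and the stronger channel up-weighted, which matches the stated goal of suppressing unimportant channel information.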