Colorectal cancer is a common gastrointestinal malignancy, and early screening and segmentation of colorectal polyps are of great clinical significance. Colonoscopy is the most effective method for detecting polyps, but some polyps may be missed during examination, so computer‐aided diagnosis is particularly important for colorectal polyp segmentation. To improve the detection rate of intestinal polyps under colonoscopy, a polyp segmentation network (MobileRaNet) based on a lightweight model and a reverse attention (RA) mechanism is proposed to accurately segment polyps in colonoscopy images. First, a coordinate attention module is used to improve MobileNetV3, yielding the backbone network (CaNet). Second, part of the high‐level feature output of the backbone network is passed into the parallel axial receptive field module (PA_RFB) to extract global dependency representations without losing detail. Third, a global map is generated from the combined features and serves as the initial guidance region for the subsequent components. Finally, the RA module mines the target region and boundary cues to improve segmentation accuracy. To verify the effectiveness and lightweight design of the algorithm, five challenging datasets, including CVC‐ColonDB, CVC‐300, and Kvasir, are used in this paper. MobileRaNet is compared with seven typical models, such as PraNet and TransUnet, on six accuracy metrics (including MeanDice, MeanIoU, and MAE) as well as FLOPs, parameters, and FPS. The experimental results show that the proposed MobileRaNet improves performance on the five datasets to varying degrees; in particular, MeanDice and MeanIoU on the Kvasir dataset reach 91.2% and 85.6%, increases of 1.4% and 1.6%, respectively, over PraNet, while FLOPs and parameters are reduced by 83.3% and 76.7%, respectively, compared with PraNet.
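
To illustrate the RA idea summarized above, the following is a minimal PyTorch sketch, not the authors' implementation: reverse attention weights are taken as 1 minus the sigmoid of the upsampled coarse map, applied to a high-level feature, and a small convolutional stack predicts a residual that refines the map. The class name ReverseAttention, channel sizes, and layer counts are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    # Hypothetical sketch of a reverse attention refinement step.
    def __init__(self, in_channels: int = 256, mid_channels: int = 64):
        super().__init__()
        # Small conv stack that turns reverse-attended features into a residual map.
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, 1, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, coarse_map: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse prediction (logits) to the feature resolution.
        guide = F.interpolate(coarse_map, size=feat.shape[2:],
                              mode="bilinear", align_corners=False)
        # Reverse attention: emphasize regions NOT covered by the current prediction,
        # which is where missed polyp parts and boundary details tend to lie.
        reverse = 1.0 - torch.sigmoid(guide)
        attended = feat * reverse
        # Predict a residual and use it to refine the coarse map.
        return guide + self.convs(attended)

# Usage: refine a coarse global map with a high-level backbone feature
# (tensor shapes here are placeholders, not the network's actual sizes).
feat = torch.randn(1, 256, 22, 22)
coarse = torch.randn(1, 1, 11, 11)
refined = ReverseAttention()(feat, coarse)
print(refined.shape)  # torch.Size([1, 1, 22, 22])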