Colorectal cancer (CRC) is gaining attention in the context of gastrointestinal tract diseases, as it ranks third among the most prevalent types of cancer. Early diagnosis of CRC can be achieved by periodic examination of the colon and rectum for a seemingly innocuous tissue abnormality called a polyp, which has the potential to become malignant in the future. CRC diagnosis using wireless capsule endoscopy requires the dedicated commitment of a medical expert, demanding significant time, focus and effort. The accuracy of manual polyp identification depends heavily on the cognitive state of the physician, emphasizing the need for automatic polyp identification. An artificial intelligence-integrated computer-aided diagnosis system could assist the clinician towards a better diagnosis, thereby reducing polyp miss-rates. In the proposed study, we developed an application program interface to aid the clinician in automatic segmentation of the polyp and evaluation of its dimensions through manual placement of four landmarks on the predicted polyp. Segmentation is performed by the proposed lightweight Padded U-Net for effective polyp segmentation in colorectal images. We trained and validated the Padded U-Net with augmented images of the Kvasir dataset and calculated the performance parameters. To facilitate image augmentation, a graphical user interface called Augment Tree was developed, which incorporates 92 augmentation techniques. The accuracy, recall, precision, IoU, F1-score and loss achieved during validation of the Padded U-Net were 95.6%, 0.946, 0.985, 0.933, 0.965 and 0.080, respectively. We demonstrated that accuracy improved and loss decreased when the model was trained with augmented images rather than only the limited original dataset images. In comparison with recently developed architectures, our Padded U-Net attained the best performance on all metrics except accuracy, where it was marginally below the highest value.
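For reference, the metrics cited above (accuracy, precision, recall, IoU and F1-score) are the standard pixel-wise measures for binary segmentation masks. The following is a minimal NumPy sketch of how such metrics can be computed from a ground-truth mask and a predicted mask; the function name segmentation_metrics and the toy masks are illustrative assumptions, not taken from the paper.

import numpy as np

def segmentation_metrics(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> dict:
    """Pixel-wise metrics for binary polyp masks (values 0 or 1). Illustrative sketch."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)

    tp = np.logical_and(y_true, y_pred).sum()      # true positive pixels
    tn = np.logical_and(~y_true, ~y_pred).sum()    # true negative pixels
    fp = np.logical_and(~y_true, y_pred).sum()     # false positive pixels
    fn = np.logical_and(y_true, ~y_pred).sum()     # false negative pixels

    accuracy  = (tp + tn) / (tp + tn + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall    = tp / (tp + fn + eps)
    iou       = tp / (tp + fp + fn + eps)          # intersection over union
    f1        = 2 * precision * recall / (precision + recall + eps)

    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "iou": iou, "f1": f1}

# Toy example with 4x4 masks (not real data from the study).
gt   = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
print(segmentation_metrics(gt, pred))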