Colorectal cancer is a severe global health issue and a significant cause of cancer-related mortality, but it is treatable if found at an early stage. Early detection typically relies on colonoscopy, during which clinicians search for cancer precursors called polyps. Research has shown that clinicians miss between 14% and 30% of polyps during standard screenings of the gastrointestinal tract. Furthermore, even when polyps are found, clinicians often overestimate their size. Automatic analysis of medical images for detecting and locating polyps is therefore a research area where machine learning has excelled in recent years, yet current models still leave much room for improvement. In this paper, we propose a novel approach based on learning to segment within several grids, which we incorporate into the U-Net and Pix2Pix architectures. We experiment with several grid sizes and use two open-source polyp segmentation datasets for cross-dataset training and testing. Our results suggest that segmentation at lower grid resolutions yields better results at the cost of spatial precision, which proves useful in cases where higher-precision segmentation performs poorly. Overall, compared to the standard U-Net and Pix2Pix, our grid-based approaches improve segmentation performance.
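The abstract does not specify how the grid targets are built, so the following is a minimal, hypothetical sketch of one way "segmenting within a grid" could be framed: a full-resolution binary polyp mask is collapsed into a coarse S x S grid target, where each cell is labelled positive if enough of its pixels belong to a polyp. The grid size of 16 and the 0.5 coverage threshold are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only (not the authors' code): reduce a full-resolution
# binary polyp mask to a coarse S x S grid target for grid-based segmentation.
import numpy as np

def mask_to_grid(mask: np.ndarray, grid_size: int = 16, threshold: float = 0.5) -> np.ndarray:
    """Collapse an (H, W) binary mask into a (grid_size, grid_size) grid.

    A grid cell is labelled 1 if at least `threshold` of its pixels are polyp.
    Grid size and threshold are assumed values for illustration.
    """
    h, w = mask.shape
    cell_h, cell_w = h // grid_size, w // grid_size
    # Crop so the mask divides evenly into cells, then compute per-cell coverage.
    cropped = mask[:cell_h * grid_size, :cell_w * grid_size]
    cells = cropped.reshape(grid_size, cell_h, grid_size, cell_w)
    coverage = cells.mean(axis=(1, 3))
    return (coverage >= threshold).astype(np.uint8)

# Example: a 256x256 mask with a small polyp region becomes a 16x16 grid target.
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:140, 60:120] = 1
grid_target = mask_to_grid(mask, grid_size=16)
print(grid_target.shape, int(grid_target.sum()))
```

A coarser grid (smaller `grid_size`) makes each cell easier to classify but localizes the polyp less precisely, which matches the trade-off between lower-resolution and higher-precision segmentation described above.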