Precise segmentation is vital for successful diagnosis and treatment planning. Medical image segmentation has advanced remarkably with the introduction of deep convolutional neural networks, particularly encoder-decoder networks such as U-Net. Despite their excellent performance, these methods have several limitations. First, the architecture is limited in its ability to combine information, because the feature maps that extract valid information from the final encoding stage are incompatible between the encoding and decoding levels. Second, the approach ignores significant semantic details and does not consider the different types of small-scale contextual information when segmenting medical images. Lastly, most methods that employ 3D architectures to process input medical images increase the computational complexity of the model without significantly improving accuracy. To resolve these issues, we propose a segmentation network called Multi-Attention Gated Residual U-Net (MAGRes-UNet), which incorporates four multi-attention gate (MAG) modules and residual blocks into a standard U-Net structure. The MAG module integrates the information from all encoding stages and focuses on small-scale tumors while suppressing irrelevant and noisy feature responses, thereby promoting meaningful contextual information. The residual blocks simplify network training and mitigate the vanishing-gradient problem, improving the network's ability to learn intricate features and deep representations. Moreover, our network employs the Mish and ReLU activation functions (AFs) together with the AdamW and Adam optimization strategies to achieve enhanced segmentation performance. The proposed MAGRes-UNet method was compared with the U-Net, Multi-Attention Gated-UNet (MAG-UNet), and Residual-UNet (ResUNet) models. In addition, a statistical t-test was performed to assess the significance of the differences between the approaches.
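As a minimal sketch of the two ingredients named above (not the authors' implementation), the Mish activation is defined as x·tanh(softplus(x)), and a residual connection adds the input back to a block's output, y = f(x) + x, which eases gradient flow during training. The function names below are illustrative, not from the paper:

```python
import math

def relu(x):
    # ReLU: max(0, x); simple but zero-gradient for negative inputs
    return max(0.0, x)

def mish(x):
    # Mish: x * tanh(softplus(x)); smooth and non-monotonic,
    # allowing small negative values to pass through
    return x * math.tanh(math.log1p(math.exp(x)))

def residual_step(x, f):
    # residual (skip) connection: output = f(x) + x,
    # so the block only needs to learn the residual f
    return f(x) + x
```

For example, `mish(1.0)` is about 0.865, slightly below `relu(1.0) = 1.0`, while `mish(-1.0)` is a small negative value where ReLU would output exactly zero.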
The analysis revealed that MAGRes-UNet with Mish and AdamW provides a significant performance improvement over the ReLU AF and Adam optimizer on two benchmark datasets: the multi-class brain tumor (BT) T1-weighted Contrast-Enhanced Magnetic Resonance Imaging (T1-CE-MRI) dataset and the HAM10000 (Human Against Machine with 10,000 training images) skin-lesion dataset. MAGRes-UNet with Mish and AdamW also achieves competitive performance compared with representative medical image segmentation methods.
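As a sketch of how such a significance test can be computed (assuming a paired, two-sided t-test on per-image scores; the paper does not specify the exact variant here), the statistic is the mean of the paired differences divided by its standard error, with n − 1 degrees of freedom. The Dice values below are hypothetical placeholders, not results from the paper:

```python
import math

def paired_t_statistic(a, b):
    # paired t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    # where d_i = a_i - b_i; degrees of freedom = n - 1
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# hypothetical per-image Dice scores for two models (illustrative only)
dice_model_a = [0.91, 0.89, 0.93, 0.90, 0.92]
dice_model_b = [0.87, 0.86, 0.90, 0.88, 0.89]
t = paired_t_statistic(dice_model_a, dice_model_b)
```

The resulting t is then compared against the Student-t distribution with n − 1 degrees of freedom to obtain a p-value; in practice a library routine such as `scipy.stats.ttest_rel` performs both steps.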