Medical image segmentation aims to identify important or suspicious regions within medical images, but developing networks for this task raises several challenges. First, preserving the original image resolution is critical, since identifying subtle features or abnormalities can significantly affect diagnostic accuracy. The dilated convolution module helped preserve resolution in deep convolutional neural networks, but it has a drawback: a loss of local spatial resolution caused by increased kernel sparsity, which produces checkerboard artifacts. To address this, we propose a double-dilated convolution module that maintains local spatial resolution while achieving a large receptive field, and we apply it to tumor segmentation in breast cancer mammograms as a proof of concept. This study also tackles pixel-level class imbalance in mammogram screenings by comparing several loss functions to identify the most suitable one for mass segmentation. Our work further addresses the "black-box" nature of the models by performing quantitative and qualitative evaluations of their interpretability using Gradient-weighted Class Activation Mapping (Grad-CAM) and comparing it with other explainable models for image segmentation. An experimental analysis of lesion segmentation networks is performed on mammogram screenings from the INbreast dataset, both before and after integrating the proposed dilation module into a state-of-the-art deep convolutional neural network. The results demonstrate the effectiveness of the proposed module in terms of Dice similarity and miss-detection rate for mass segmentation. Our analysis also supports using the Tversky loss function when training on pixel-imbalanced data and integrating Grad-CAM to explain image segmentation results.
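
To make the proposed module concrete, the following is a minimal PyTorch sketch of a double-dilated convolution block. The abstract does not specify the exact design, so the class name, dilation rates, and layer ordering below are illustrative assumptions: the idea sketched here is that a second dilated convolution samples the positions left uncovered by the first kernel's sparse grid, so the composed receptive field stays large without checkerboard gaps.

import torch
import torch.nn as nn

class DoubleDilatedConv(nn.Module):
    """Hypothetical sketch of a double-dilated convolution block.

    Two stacked 3x3 dilated convolutions: the second uses a smaller
    dilation rate so its samples fall inside the gaps of the first
    kernel's sparse grid, keeping the combined receptive field large
    but locally dense. Rates are placeholders, not taken from the paper.
    """

    def __init__(self, in_channels, out_channels, rate=2):
        super().__init__()
        self.block = nn.Sequential(
            # First dilated convolution enlarges the receptive field.
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      padding=rate, dilation=rate),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            # Second convolution with a smaller rate fills the sampling
            # gaps left by the first kernel, avoiding gridding artifacts.
            nn.Conv2d(out_channels, out_channels, kernel_size=3,
                      padding=rate - 1, dilation=rate - 1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Spatial size is preserved: padding equals dilation for 3x3 kernels.
        return self.block(x)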
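For reference, the Tversky loss recommended above generalizes the Dice loss with two weights that penalize false positives and false negatives asymmetrically, which is what makes it attractive for pixel-imbalanced masks. With predicted probabilities $p_i$, ground-truth labels $g_i$, and a small smoothing constant $\epsilon$, it can be written as

$$
\mathcal{L}_{\text{Tversky}} = 1 - \frac{\sum_i p_i g_i + \epsilon}{\sum_i p_i g_i + \alpha \sum_i p_i (1 - g_i) + \beta \sum_i (1 - p_i) g_i + \epsilon},
$$

where $\sum_i p_i (1 - g_i)$ counts false positives and $\sum_i (1 - p_i) g_i$ counts false negatives. The abstract does not report the $\alpha$ and $\beta$ values used in this work; a common choice in the literature sets $\beta > \alpha$ to weight false negatives more heavily.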
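For completeness, the standard Grad-CAM formulation referenced above computes a class-discriminative localization map by weighting the feature maps $A^k$ of a chosen convolutional layer with gradient-derived importance scores:

$$
L^{c}_{\text{Grad-CAM}} = \operatorname{ReLU}\!\left(\sum_{k} \alpha_{k}^{c} A^{k}\right),
\qquad
\alpha_{k}^{c} = \frac{1}{Z} \sum_{i} \sum_{j} \frac{\partial y^{c}}{\partial A^{k}_{ij}},
$$

where $y^c$ is the score for class $c$ and $Z$ is the number of spatial positions in the feature map. How this map is adapted to per-pixel segmentation outputs, rather than a single classification score, is specific to this work and not detailed in this section.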