Purpose
Breast mass segmentation in mammograms remains a crucial yet challenging task in computer‐aided diagnosis systems. Most existing algorithms segment masses from mass‐centered patches, which is time‐consuming and unstable in clinical diagnosis. Therefore, we aim to perform fully automated mass segmentation directly on whole mammograms with deep learning solutions.
Methods
In this work, we propose a novel dual contextual affinity network (DCANet) for mass segmentation in whole mammograms. Built on an encoder–decoder structure, two lightweight yet effective contextual affinity modules are proposed: the global‐guided affinity module (GAM) and the local‐guided affinity module (LAM). The former aggregates features from all positions and captures long‐range contextual dependencies, enhancing the feature representations of homogeneous regions. The latter emphasizes semantic information around each position and exploits contextual affinity within a local field‐of‐view, improving the discrimination of heterogeneous regions.
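To make the two affinity mechanisms concrete, the sketch below illustrates the general idea of global versus local contextual affinity on a feature map. This is a minimal NumPy illustration of the underlying attention pattern, not the paper's implementation: the function names, the dot‐product similarity, and the window size are our assumptions.

```python
import numpy as np

def _row_softmax(sim):
    """Row-wise softmax with max-subtraction for numerical stability."""
    sim = sim - sim.max(axis=1, keepdims=True)
    e = np.exp(sim)
    return e / e.sum(axis=1, keepdims=True)

def global_affinity(feat):
    """Global contextual affinity (illustrative): every spatial position
    attends to every other position, so context is aggregated over the
    whole map. feat: array of shape (C, H, W)."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)          # flatten spatial dims: (C, N)
    attn = _row_softmax(x.T @ x)        # (N, N) pairwise affinities
    out = x @ attn.T                    # each position mixes all positions
    return out.reshape(c, h, w)

def local_affinity(feat, radius=1):
    """Local contextual affinity (illustrative): each position attends only
    to neighbors within a (2*radius+1)^2 window, exploiting affinity in a
    restricted field-of-view."""
    c, h, w = feat.shape
    out = np.zeros_like(feat)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            nbr = feat[:, i0:i1, j0:j1].reshape(c, -1)   # (C, K) neighbors
            sim = nbr.T @ feat[:, i, j]                  # (K,) affinities
            wgt = _row_softmax(sim[None, :])[0]          # softmax over window
            out[:, i, j] = nbr @ wgt                     # weighted aggregation
    return out
```

In a real network both operations would act on learned query/key/value projections and be fused with the encoder–decoder features via residual connections; the sketch keeps only the affinity computation itself.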
Results
The proposed DCANet is evaluated on two public mammographic databases, DDSM and INbreast, achieving Dice similarity coefficients (DSC) of 85.95% and 84.65%, respectively. It outperforms current state‐of‐the‐art methods in both segmentation performance and computational efficiency.
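For reference, the reported DSC metric is the standard overlap measure DSC = 2|A∩B| / (|A| + |B|) between a predicted mask A and a ground‐truth mask B. A minimal implementation (the epsilon smoothing term is our convention, not specified by the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.
    Returns 2*|intersection| / (|pred| + |target|), smoothed by eps
    so that two empty masks score 1.0 instead of dividing by zero."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```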
Conclusion
Extensive qualitative and quantitative analyses indicate that the proposed fully automated approach is sufficiently robust to provide fast and accurate breast mass segmentation in clinical settings.